* [PATCH v2 0/9] RISC-V: Support XTheadVector extensions
@ 2023-11-18  4:22 Jun Sha (Joshua)
  2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
                   ` (8 more replies)
  0 siblings, 9 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:22 UTC
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

This patch series presents the GCC implementation of the
XTheadVector extension [1].

[1] https://github.com/T-head-Semi/thead-extension-spec/
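
For context, here is a minimal usage sketch (illustrative only, not
part of the series).  The __riscv_* intrinsic and vint32m1_t type
names are assumptions based on the standard RVV intrinsic API, which
riscv_th_vector.h (added in patch 2) pulls in via
#pragma riscv intrinsic "vector":

/* Illustrative sketch; compile with:
     gcc -march=rv64gc_xtheadvector -mabi=lp64d vadd.c
   Assumes the standard RVV intrinsic names are re-used unchanged
   under XTheadVector.  riscv_th_vector.h includes <stdint.h> and
   <stddef.h>, so int32_t and size_t are available.  */
#include <riscv_th_vector.h>

void
vec_add (int32_t *out, const int32_t *a, const int32_t *b, size_t n)
{
  while (n > 0)
    {
      size_t vl = __riscv_vsetvl_e32m1 (n);          /* strip-mine */
      vint32m1_t va = __riscv_vle32_v_i32m1 (a, vl); /* unit-stride load */
      vint32m1_t vb = __riscv_vle32_v_i32m1 (b, vl);
      __riscv_vse32_v_i32m1 (out, __riscv_vadd_vv_i32m1 (va, vb, vl), vl);
      a += vl; b += vl; out += vl; n -= vl;
    }
}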

I have updated the patch series because I forgot to add the
co-authors in the previous version.

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

RISC-V: minimal support for xtheadvector
RISC-V: Handle differences between xtheadvector and vector
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5)
RISC-V: Add support for xtheadvector-specific load/store intrinsics
RISC-V: Disable fractional type intrinsics for XTheadVector

---
 gcc/common/config/riscv/riscv-common.cc       |  10 +
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/riscv-c.cc                   |   8 +-
 gcc/config/riscv/riscv-protos.h               |   1 +
 .../riscv/riscv-vector-builtins-bases.cc      | 122 +++
 .../riscv/riscv-vector-builtins-bases.h       |  30 +
 .../riscv/riscv-vector-builtins-functions.def |   2 +
 .../riscv/riscv-vector-builtins-shapes.cc     | 122 +++
 .../riscv/riscv-vector-builtins-shapes.h      |   2 +
 .../riscv/riscv-vector-builtins-types.def     | 120 +++
 gcc/config/riscv/riscv-vector-builtins.cc     | 300 ++++++-
 gcc/config/riscv/riscv-vector-switch.def      | 144 ++--
 gcc/config/riscv/riscv.cc                     |  13 +-
 gcc/config/riscv/riscv.opt                    |   2 +
 gcc/config/riscv/riscv_th_vector.h            |  49 ++
 .../riscv/thead-vector-builtins-functions.def |  30 +
 gcc/config/riscv/thead-vector.md              | 235 ++++++
 gcc/config/riscv/vector-iterators.md          |   4 +
 gcc/config/riscv/vector.md                    | 778 +++++++++---------
 .../riscv/predef-__riscv_th_v_intrinsic.c     |  11 +
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 .../gcc.target/riscv/rvv/fractional-type.c    |  79 ++
 .../gcc.target/riscv/rvv/xtheadvector.c       |  13 +
 .../rvv/xtheadvector/autovec/vadd-run-nofm.c  |   4 +
 .../riscv/rvv/xtheadvector/autovec/vadd-run.c |  81 ++
 .../xtheadvector/autovec/vadd-rv32gcv-nofm.c  |  10 +
 .../rvv/xtheadvector/autovec/vadd-rv32gcv.c   |   8 +
 .../xtheadvector/autovec/vadd-rv64gcv-nofm.c  |  10 +
 .../rvv/xtheadvector/autovec/vadd-rv64gcv.c   |   8 +
 .../rvv/xtheadvector/autovec/vadd-template.h  |  70 ++
 .../rvv/xtheadvector/autovec/vadd-zvfh-run.c  |  54 ++
 .../riscv/rvv/xtheadvector/autovec/vand-run.c |  75 ++
 .../rvv/xtheadvector/autovec/vand-rv32gcv.c   |   7 +
 .../rvv/xtheadvector/autovec/vand-rv64gcv.c   |   7 +
 .../rvv/xtheadvector/autovec/vand-template.h  |  61 ++
 .../rvv/xtheadvector/binop_vv_constraint-1.c  |  68 ++
 .../rvv/xtheadvector/binop_vv_constraint-3.c  |  27 +
 .../rvv/xtheadvector/binop_vv_constraint-4.c  |  27 +
 .../rvv/xtheadvector/binop_vv_constraint-5.c  |  29 +
 .../rvv/xtheadvector/binop_vv_constraint-6.c  |  28 +
 .../rvv/xtheadvector/binop_vv_constraint-7.c  |  29 +
 .../rvv/xtheadvector/binop_vx_constraint-1.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-10.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-11.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-12.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-13.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-14.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-15.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-16.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-17.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-18.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-19.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-2.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-20.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-21.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-22.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-23.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-24.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-25.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-26.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-27.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-28.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-29.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-3.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-30.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-31.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-32.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-33.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-34.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-35.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-36.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-37.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-38.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-39.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-4.c  |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-40.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-41.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-42.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-43.c |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-44.c |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-45.c | 123 +++
 .../rvv/xtheadvector/binop_vx_constraint-46.c |  72 ++
 .../rvv/xtheadvector/binop_vx_constraint-47.c |  16 +
 .../rvv/xtheadvector/binop_vx_constraint-48.c |  16 +
 .../rvv/xtheadvector/binop_vx_constraint-49.c |  16 +
 .../rvv/xtheadvector/binop_vx_constraint-5.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-50.c |  18 +
 .../rvv/xtheadvector/binop_vx_constraint-6.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-7.c  |  68 ++
 .../rvv/xtheadvector/binop_vx_constraint-8.c  |  73 ++
 .../rvv/xtheadvector/binop_vx_constraint-9.c  |  68 ++
 .../rvv/xtheadvector/rvv-xtheadvector.exp     |  41 +
 .../rvv/xtheadvector/ternop_vv_constraint-1.c |  83 ++
 .../rvv/xtheadvector/ternop_vv_constraint-2.c |  83 ++
 .../rvv/xtheadvector/ternop_vv_constraint-3.c |  83 ++
 .../rvv/xtheadvector/ternop_vv_constraint-4.c |  83 ++
 .../rvv/xtheadvector/ternop_vv_constraint-5.c |  83 ++
 .../rvv/xtheadvector/ternop_vv_constraint-6.c |  83 ++
 .../rvv/xtheadvector/ternop_vx_constraint-1.c |  71 ++
 .../rvv/xtheadvector/ternop_vx_constraint-2.c |  38 +
 .../rvv/xtheadvector/ternop_vx_constraint-3.c | 125 +++
 .../rvv/xtheadvector/ternop_vx_constraint-4.c | 123 +++
 .../rvv/xtheadvector/ternop_vx_constraint-5.c | 123 +++
 .../rvv/xtheadvector/ternop_vx_constraint-6.c | 130 +++
 .../rvv/xtheadvector/ternop_vx_constraint-7.c | 130 +++
 .../rvv/xtheadvector/ternop_vx_constraint-8.c |  71 ++
 .../rvv/xtheadvector/ternop_vx_constraint-9.c |  71 ++
 .../rvv/xtheadvector/unop_v_constraint-1.c    |  68 ++
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |  68 ++
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |  68 ++
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |  68 ++
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |  68 ++
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |  68 ++
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |  68 ++
 114 files changed, 7455 insertions(+), 457 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c

* [PATCH v2 1/9] RISC-V: minimal support for xtheadvector
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
@ 2023-11-18  4:26 ` Jun Sha (Joshua)
  2023-11-18 10:06   ` Kito Cheng
  2023-11-18  4:28 ` [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:26 UTC
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

This patch introduces basic XTheadVector support
(-march string parsing and a test for the __riscv_xtheadvector
macro) according to the specification at
https://github.com/T-head-Semi/thead-extension-spec/
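
As a quick illustration (a sketch mirroring the new tests below, not
part of the patch), user code can key off the two macros this patch
defines; 11000 is the value riscv_ext_version_value (0, 11) yields
for the 0.11 intrinsic version:

/* Sketch of feature-macro usage; compile with e.g.
   -march=rv64gc_xtheadvector.  */
#ifdef __riscv_xtheadvector
# if __riscv_th_v_intrinsic == 11000  /* intrinsic API version 0.11 */
/* XTheadVector-specific code paths go here.  */
# endif
#endif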

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/ChangeLog:

	* common/config/riscv/riscv-common.cc
	(riscv_subset_list::parse): Add new vendor extension.
	* config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):
	Add test macro.
	* config/riscv/riscv.opt: Add new mask.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/predef-__riscv_th_v_intrinsic.c: New test.
	* gcc.target/riscv/rvv/xtheadvector.c: New test.
---
 gcc/common/config/riscv/riscv-common.cc             | 10 ++++++++++
 gcc/config/riscv/riscv-c.cc                         |  4 ++++
 gcc/config/riscv/riscv.opt                          |  2 ++
 .../riscv/predef-__riscv_th_v_intrinsic.c           | 11 +++++++++++
 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c   | 13 +++++++++++++
 5 files changed, 40 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c

diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 526dbb7603b..914924171fd 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -75,6 +75,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
 
   {"v", "zvl128b"},
   {"v", "zve64d"},
+  {"xtheadvector", "zvl128b"},
+  {"xtheadvector", "zve64d"},
 
   {"zve32f", "f"},
   {"zve64f", "f"},
@@ -325,6 +327,7 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
   {"xtheadmemidx", ISA_SPEC_CLASS_NONE, 1, 0},
   {"xtheadmempair", ISA_SPEC_CLASS_NONE, 1, 0},
   {"xtheadsync", ISA_SPEC_CLASS_NONE, 1, 0},
+  {"xtheadvector", ISA_SPEC_CLASS_NONE, 1, 0},
 
   {"xventanacondops", ISA_SPEC_CLASS_NONE, 1, 0},
 
@@ -1495,6 +1498,10 @@ riscv_subset_list::parse (const char *arch, location_t loc)
     error_at (loc, "%<-march=%s%>: z*inx conflicts with floating-point "
 		   "extensions", arch);
 
+  if (subset_list->lookup ("v") && subset_list->lookup ("xtheadvector"))
+    error_at (loc, "%<-march=%s%>: xtheadvector conflicts with vector "
+		   "extensions", arch);
+
   /* 'H' hypervisor extension requires base ISA with 32 registers.  */
   if (subset_list->lookup ("e") && subset_list->lookup ("h"))
     error_at (loc, "%<-march=%s%>: h extension requires i extension", arch);
@@ -1680,6 +1687,9 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
   {"xtheadmemidx",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMIDX},
   {"xtheadmempair", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMPAIR},
   {"xtheadsync",    &gcc_options::x_riscv_xthead_subext, MASK_XTHEADSYNC},
+  {"xtheadvector",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADVECTOR},
+  {"xtheadvector",  &gcc_options::x_target_flags, MASK_FULL_V},
+  {"xtheadvector",  &gcc_options::x_target_flags, MASK_VECTOR},
 
   {"xventanacondops", &gcc_options::x_riscv_xventana_subext, MASK_XVENTANACONDOPS},
 
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index b7f9ba204f7..184fff905b2 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -137,6 +137,10 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
 				     riscv_ext_version_value (0, 11));
     }
 
+  if (TARGET_XTHEADVECTOR)
+    builtin_define_with_int_value ("__riscv_th_v_intrinsic",
+				   riscv_ext_version_value (0, 11));
+
   /* Define architecture extension test macros.  */
   builtin_define_with_int_value ("__riscv_arch_test", 1);
 
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index 70d78151cee..72857aea352 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -438,6 +438,8 @@ Mask(XTHEADMEMPAIR) Var(riscv_xthead_subext)
 
 Mask(XTHEADSYNC)    Var(riscv_xthead_subext)
 
+Mask(XTHEADVECTOR)  Var(riscv_xthead_subext)
+
 TargetVariable
 int riscv_xventana_subext
 
diff --git a/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
new file mode 100644
index 00000000000..1c764241db6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64imafdcxtheadvector -mabi=lp64d" } */
+
+int main () {
+
+#if __riscv_th_v_intrinsic != 11000
+#error "__riscv_th_v_intrinsic"
+#endif
+
+  return 0;
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
new file mode 100644
index 00000000000..d52921e1314
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector" { target { rv32 } } } */
+/* { dg-options "-march=rv64gc_xtheadvector" { target { rv64 } } } */
+
+#ifndef __riscv_xtheadvector
+#error "Feature macro not defined"
+#endif
+
+int
+foo (int a)
+{
+  return a;
+}
\ No newline at end of file
-- 
2.17.1


* [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
  2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
@ 2023-11-18  4:28 ` Jun Sha (Joshua)
  2023-11-18 10:13   ` Kito Cheng
  2023-11-18  4:29 ` [PATCH v2 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:28 UTC
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

This patch handles the differences in instruction generation
between the 'V' extension and XTheadVector, mainly adding the
"th." prefix to all XTheadVector instructions.
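
To make the effect concrete, here is a hand-written comparison based
on the templates changed below (expected output, not a dump from the
patched compiler).  The new '%^' operand punctuation expands to "th."
when TARGET_XTHEADVECTOR is set, and XTheadVector loads/stores also
drop the SEW suffix:

    # 'V' extension:           # XTheadVector:
    vle32.v v1,(a0)            th.vle.v   v1,(a0)
    vadd.vv v1,v1,v2           th.vadd.vv v1,v1,v2
    vse32.v v1,(a1)            th.vse.v   v1,(a1)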

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/ChangeLog:

	* config.gcc: Add header for XTheadVector intrinsics.
	* config/riscv/riscv-c.cc (riscv_pragma_intrinsic):
	Add XTheadVector.
	* config/riscv/riscv.cc (riscv_print_operand):
	Add new operand format directives.
	(riscv_print_operand_punct_valid_p): Likewise.
	* config/riscv/vector-iterators.md: Split any_int_unop
	for not and neg.
	* config/riscv/vector.md (@pred_<optab><mode>):
	Add th. for xtheadvector instructions.
	* config/riscv/riscv_th_vector.h: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/riscv-c.cc                   |   4 +-
 gcc/config/riscv/riscv.cc                     |  11 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 ++
 gcc/config/riscv/vector-iterators.md          |   4 +
 gcc/config/riscv/vector.md                    | 777 +++++++++---------
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 7 files changed, 466 insertions(+), 383 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index ba6d63e33ac..e0fc2b1a27c 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -548,7 +548,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index 184fff905b2..0a17d5f6656 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -194,8 +194,8 @@ riscv_pragma_intrinsic (cpp_reader *)
     {
       if (!TARGET_VECTOR)
 	{
-	  error ("%<#pragma riscv intrinsic%> option %qs needs 'V' extension "
-		 "enabled",
+	  error ("%<#pragma riscv intrinsic%> option %qs needs 'V' or "
+		 "'XTHEADVECTOR' extension enabled",
 		 name);
 	  return;
 	}
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index ecee7eb4727..754107cdaac 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5323,7 +5323,7 @@ riscv_get_v_regno_alignment (machine_mode mode)
 static void
 riscv_print_operand (FILE *file, rtx op, int letter)
 {
-  /* `~` does not take an operand so op will be null
+  /* `~` and `^` do not take an operand so op will be null
      Check for before accessing op.
   */
   if (letter == '~')
@@ -5332,6 +5332,13 @@ riscv_print_operand (FILE *file, rtx op, int letter)
 	fputc('w', file);
       return;
     }
+
+  if (letter == '^')
+    {
+      if (TARGET_XTHEADVECTOR)
+	fputs ("th.", file);
+      return;
+    }
   machine_mode mode = GET_MODE (op);
   enum rtx_code code = GET_CODE (op);
 
@@ -5584,7 +5591,7 @@ riscv_print_operand (FILE *file, rtx op, int letter)
 static bool
 riscv_print_operand_punct_valid_p (unsigned char code)
 {
-  return (code == '~');
+  return (code == '~' || code == '^');
 }
 
 /* Implement TARGET_PRINT_OPERAND_ADDRESS.  */
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..194652032bc
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It
+   does not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index f04c7fe5491..4b1ba84750c 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3679,6 +3679,10 @@ (define_code_iterator any_int_binop [plus minus and ior xor ashift ashiftrt lshi
 
 (define_code_iterator any_int_unop [neg not])
 
+(define_code_iterator neg_unop [neg])
+
+(define_code_iterator not_unop [not])
+
 (define_code_iterator any_commutative_binop [plus and ior xor
   smax umax smin umin mult
 ])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index d1499d330ff..2af237854f9 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -1099,9 +1099,9 @@ (define_insn "*mov<mode>_whole"
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
   "TARGET_VECTOR"
   "@
-   vl%m1re<sew>.v\t%0,%1
-   vs%m1r.v\t%1,%0
-   vmv%m1r.v\t%0,%1"
+   * return TARGET_XTHEADVECTOR ? \"th.vl%m1re.v\t%0,%1\" : \"vl%m1re<sew>.v\t%0,%1\";
+   %^vs%m1r.v\t%1,%0
+   %^vmv%m1r.v\t%0,%1"
   [(set_attr "type" "vldr,vstr,vmov")
    (set_attr "mode" "<MODE>")])
 
@@ -1109,7 +1109,7 @@ (define_insn "*mov<mode>_fract"
   [(set (match_operand:V_FRACT 0 "register_operand" "=vr")
 	(match_operand:V_FRACT 1 "register_operand" " vr"))]
   "TARGET_VECTOR"
-  "vmv1r.v\t%0,%1"
+  "%^vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
 
@@ -1126,7 +1126,7 @@ (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
   "TARGET_VECTOR"
-  "vmv1r.v\t%0,%1"
+  "%^vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
 
@@ -1135,7 +1135,7 @@ (define_expand "@mov<V_FRACT:mode><P:mode>_lra"
     [(set (match_operand:V_FRACT 0 "reg_or_mem_operand")
 	  (match_operand:V_FRACT 1 "reg_or_mem_operand"))
    (clobber (match_scratch:P 2))])]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"
 {})
 
 (define_expand "@mov<VB:mode><P:mode>_lra"
@@ -1143,14 +1143,14 @@ (define_expand "@mov<VB:mode><P:mode>_lra"
     [(set (match_operand:VB 0 "reg_or_mem_operand")
 	  (match_operand:VB 1 "reg_or_mem_operand"))
    (clobber (match_scratch:P 2))])]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"
 {})
 
 (define_insn_and_split "*mov<V_FRACT:mode><P:mode>_lra"
   [(set (match_operand:V_FRACT 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_FRACT 1 "reg_or_mem_operand" "  m,vr,vr"))
    (clobber (match_scratch:P 2 "=&r,&r,X"))]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"
   "#"
   "&& reload_completed"
   [(const_int 0)]
@@ -1172,7 +1172,7 @@ (define_insn_and_split "*mov<VB:mode><P:mode>_lra"
   [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:VB 1 "reg_or_mem_operand" "  m,vr,vr"))
    (clobber (match_scratch:P 2 "=&r,&r,X"))]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"
   "#"
   "&& reload_completed"
   [(const_int 0)]
@@ -1258,7 +1258,7 @@ (define_insn_and_split "*mov<mode>"
   "@
    #
    #
-   vmv%m1r.v\t%0,%1"
+   %^vmv%m1r.v\t%0,%1"
   "&& reload_completed
    && (!register_operand (operands[0], <MODE>mode)
        || !register_operand (operands[1], <MODE>mode))"
@@ -1286,14 +1286,14 @@ (define_expand "@mov<VLS_AVL_REG:mode><P:mode>_lra"
     [(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand")
 	  (match_operand:VLS_AVL_REG 1 "reg_or_mem_operand"))
    (clobber (match_scratch:P 2))])]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"
 {})
 
 (define_insn_and_split "*mov<VLS_AVL_REG:mode><P:mode>_lra"
   [(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:VLS_AVL_REG 1 "reg_or_mem_operand" "  m,vr,vr"))
    (clobber (match_scratch:P 2 "=&r,&r,X"))]
-  "TARGET_VECTOR && (lra_in_progress || reload_completed)
+  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)
    && (register_operand (operands[0], <VLS_AVL_REG:MODE>mode)
        || register_operand (operands[1], <VLS_AVL_REG:MODE>mode))"
   "#"
@@ -1322,7 +1322,7 @@ (define_insn "*mov<mode>_vls"
   [(set (match_operand:VLS 0 "register_operand" "=vr")
 	(match_operand:VLS 1 "register_operand" " vr"))]
   "TARGET_VECTOR"
-  "vmv%m1r.v\t%0,%1"
+  "%^vmv%m1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
 
@@ -1330,7 +1330,7 @@ (define_insn "*mov<mode>_vls"
   [(set (match_operand:VLSB 0 "register_operand" "=vr")
 	(match_operand:VLSB 1 "register_operand" " vr"))]
   "TARGET_VECTOR"
-  "vmv1r.v\t%0,%1"
+  "%^vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
 
@@ -1359,7 +1359,7 @@ (define_expand "movmisalign<mode>"
 (define_expand "movmisalign<mode>"
   [(set (match_operand:V 0 "nonimmediate_operand")
 	(match_operand:V 1 "general_operand"))]
-  "TARGET_VECTOR && TARGET_VECTOR_MISALIGN_SUPPORTED"
+  "TARGET_VECTOR &&  TARGET_VECTOR_MISALIGN_SUPPORTED"
   {
     emit_move_insn (operands[0], operands[1]);
     DONE;
@@ -1396,7 +1396,7 @@ (define_insn_and_split "*vec_duplicate<mode>"
   [(set (match_operand:V_VLS 0 "register_operand")
         (vec_duplicate:V_VLS
           (match_operand:<VEL> 1 "direct_broadcast_operand")))]
-  "TARGET_VECTOR && can_create_pseudo_p ()"
+  "TARGET_VECTOR &&  can_create_pseudo_p ()"
   "#"
   "&& 1"
   [(const_int 0)]
@@ -1530,7 +1530,7 @@ (define_insn "@vsetvl<mode>"
 		    (match_dup 4)
 		    (match_dup 5)] UNSPEC_VSETVL))]
   "TARGET_VECTOR"
-  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
+  "%^vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "<MODE>")
    (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
@@ -1548,7 +1548,7 @@ (define_insn "vsetvl_vtype_change_only"
 	   (match_operand 2 "const_int_operand" "i")
 	   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
   "TARGET_VECTOR"
-  "vsetvli\tzero,zero,e%0,%m1,t%p2,m%p3"
+  "%^vsetvli\tzero,zero,e%0,%m1,t%p2,m%p3"
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "SI")
    (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
@@ -1570,7 +1570,7 @@ (define_insn "@vsetvl_discard_result<mode>"
 		    (match_operand 3 "const_int_operand" "i")
 		    (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
   "TARGET_VECTOR"
-  "vset%i0vli\tzero,%0,e%1,%m2,t%p3,m%p4"
+  "%^vset%i0vli\tzero,%0,e%1,%m2,t%p3,m%p4"
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "<MODE>")
    (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
@@ -1720,12 +1720,12 @@ (define_insn_and_split "*pred_mov<mode>"
     && (register_operand (operands[0], <MODE>mode)
         || register_operand (operands[3], <MODE>mode)))"
   "@
-   vle<sew>.v\t%0,%3%p1
-   vle<sew>.v\t%0,%3
-   vle<sew>.v\t%0,%3,%1.t
-   vse<sew>.v\t%3,%0%p1
-   vmv.v.v\t%0,%3
-   vmv.v.v\t%0,%3"
+   * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3%p1\" : \"vle<sew>.v\t%0,%3%p1\";
+   * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3\" : \"vle<sew>.v\t%0,%3\";
+   * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3,%1.t\" : \"vle<sew>.v\t%0,%3,%1.t\";
+   * return TARGET_XTHEADVECTOR ? \"th.vse.v\t%3,%0%p1\" : \"vse<sew>.v\t%3,%0%p1\";
+   %^vmv.v.v\t%0,%3
+   %^vmv.v.v\t%0,%3"
   "&& register_operand (operands[0], <MODE>mode)
    && register_operand (operands[3], <MODE>mode)
    && satisfies_constraint_vu (operands[2])
@@ -1749,7 +1749,7 @@ (define_insn "@pred_store<mode>"
 	  (match_operand:V 2 "register_operand"         "    vr")
 	  (match_dup 0)))]
   "TARGET_VECTOR"
-  "vse<sew>.v\t%2,%0%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vse.v\t%2,%0%p1" : "vse<sew>.v\t%2,%0%p1"; }
   [(set_attr "type" "vste")
    (set_attr "mode" "<MODE>")
    (set (attr "avl_type_idx") (const_int 4))
@@ -1773,11 +1773,11 @@ (define_insn_and_split "@pred_mov<mode>"
 	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]
   "TARGET_VECTOR"
   "@
-   vlm.v\t%0,%3
-   vsm.v\t%3,%0
-   vmmv.m\t%0,%3
-   vmclr.m\t%0
-   vmset.m\t%0"
+   %^vlm.v\t%0,%3
+   %^vsm.v\t%3,%0
+   %^vmmv.m\t%0,%3
+   %^vmclr.m\t%0
+   %^vmset.m\t%0"
   "&& register_operand (operands[0], <MODE>mode)
    && register_operand (operands[3], <MODE>mode)
    && INTVAL (operands[5]) == riscv_vector::VLMAX"
@@ -1800,7 +1800,7 @@ (define_insn "@pred_store<mode>"
 	  (match_operand:VB 2 "register_operand"                 " vr")
 	  (match_dup 0)))]
   "TARGET_VECTOR"
-  "vsm.v\t%2,%0"
+  "%^vsm.v\t%2,%0"
   [(set_attr "type" "vstm")
    (set_attr "mode" "<MODE>")
    (set (attr "avl_type_idx") (const_int 4))
@@ -1821,7 +1821,7 @@ (define_insn "@pred_merge<mode>"
 	(match_operand:<VM> 4 "register_operand"     " vm,vm,vm,vm"))
       (match_operand:V_VLS 1 "vector_merge_operand"      " vu, 0,vu, 0")))]
   "TARGET_VECTOR"
-  "vmerge.v%o3m\t%0,%2,%v3,%4"
+  "%^vmerge.v%o3m\t%0,%2,%v3,%4"
   [(set_attr "type" "vimerge")
    (set_attr "mode" "<MODE>")])
 
@@ -1841,7 +1841,7 @@ (define_insn "@pred_merge<mode>_scalar"
 	(match_operand:<VM> 4 "register_operand"     " vm,vm"))
       (match_operand:V_VLSI_QHS 1 "vector_merge_operand" " vu, 0")))]
   "TARGET_VECTOR"
-  "vmerge.vxm\t%0,%2,%3,%4"
+  "%^vmerge.vxm\t%0,%2,%3,%4"
   [(set_attr "type" "vimerge")
    (set_attr "mode" "<MODE>")])
 
@@ -1893,7 +1893,7 @@ (define_insn "*pred_merge<mode>_scalar"
 	(match_operand:<VM> 4 "register_operand"     " vm,vm"))
       (match_operand:V_VLSI_D 1 "vector_merge_operand"   " vu, 0")))]
   "TARGET_VECTOR"
-  "vmerge.vxm\t%0,%2,%3,%4"
+  "%^vmerge.vxm\t%0,%2,%3,%4"
   [(set_attr "type" "vimerge")
    (set_attr "mode" "<MODE>")])
 
@@ -1914,7 +1914,7 @@ (define_insn "*pred_merge<mode>_extended_scalar"
 	(match_operand:<VM> 4 "register_operand"         " vm,vm"))
       (match_operand:V_VLSI_D 1 "vector_merge_operand"       " vu, 0")))]
   "TARGET_VECTOR"
-  "vmerge.vxm\t%0,%2,%3,%4"
+  "%^vmerge.vxm\t%0,%2,%3,%4"
   [(set_attr "type" "vimerge")
    (set_attr "mode" "<MODE>")])
 
@@ -2004,14 +2004,14 @@ (define_insn_and_split "*pred_broadcast<mode>"
 	  (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]
   "TARGET_VECTOR"
   "@
-   vmv.v.x\t%0,%3
-   vmv.v.x\t%0,%3
-   vlse<sew>.v\t%0,%3,zero,%1.t
-   vlse<sew>.v\t%0,%3,zero,%1.t
-   vlse<sew>.v\t%0,%3,zero
-   vlse<sew>.v\t%0,%3,zero
-   vmv.s.x\t%0,%3
-   vmv.s.x\t%0,%3"
+   %^vmv.v.x\t%0,%3
+   %^vmv.v.x\t%0,%3
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+   %^vmv.s.x\t%0,%3
+   %^vmv.s.x\t%0,%3"
   "(register_operand (operands[3], <VEL>mode)
   || CONST_POLY_INT_P (operands[3]))
   && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
@@ -2065,14 +2065,14 @@ (define_insn "*pred_broadcast<mode>"
 	  (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]
   "TARGET_VECTOR"
   "@
-   vfmv.v.f\t%0,%3
-   vfmv.v.f\t%0,%3
-   vlse<sew>.v\t%0,%3,zero,%1.t
-   vlse<sew>.v\t%0,%3,zero,%1.t
-   vlse<sew>.v\t%0,%3,zero
-   vlse<sew>.v\t%0,%3,zero
-   vfmv.s.f\t%0,%3
-   vfmv.s.f\t%0,%3"
+   %^vfmv.v.f\t%0,%3
+   %^vfmv.v.f\t%0,%3
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+   * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+   %^vfmv.s.f\t%0,%3
+   %^vfmv.s.f\t%0,%3"
   [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
    (set_attr "mode" "<MODE>")])
 
@@ -2093,10 +2093,10 @@ (define_insn "*pred_broadcast<mode>_extended_scalar"
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"          "vu,  0, vu,  0")))]
   "TARGET_VECTOR"
   "@
-   vmv.v.x\t%0,%3
-   vmv.v.x\t%0,%3
-   vmv.s.x\t%0,%3
-   vmv.s.x\t%0,%3"
+   %^vmv.v.x\t%0,%3
+   %^vmv.v.x\t%0,%3
+   %^vmv.s.x\t%0,%3
+   %^vmv.s.x\t%0,%3"
   [(set_attr "type" "vimov,vimov,vimovxv,vimovxv")
    (set_attr "mode" "<MODE>")])
 
@@ -2114,7 +2114,7 @@ (define_insn "*pred_broadcast<mode>_zero"
       (match_operand:V_VLS 3 "vector_const_0_operand"                      "Wc0,   Wc0")
       (match_operand:V_VLS 2 "vector_merge_operand"                        " vu,     0")))]
   "TARGET_VECTOR"
-  "vmv.s.x\t%0,zero"
+  "%^vmv.s.x\t%0,zero"
   [(set_attr "type" "vimovxv,vimovxv")
    (set_attr "mode" "<MODE>")])
 
@@ -2134,7 +2134,7 @@ (define_insn "*pred_broadcast<mode>_imm"
       (match_operand:V_VLS 3 "vector_const_int_or_double_0_operand" "viWc0, viWc0")
       (match_operand:V_VLS 2 "vector_merge_operand"                 "   vu,     0")))]
   "TARGET_VECTOR"
-  "vmv.v.i\t%0,%v3"
+  "%^vmv.v.i\t%0,%v3"
   [(set_attr "type" "vimov,vimov")
    (set_attr "mode" "<MODE>")])
 
@@ -2162,12 +2162,12 @@ (define_insn "@pred_strided_load<mode>"
 	  (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]
   "TARGET_VECTOR"
   "@
-  vlse<sew>.v\t%0,%3,%z4%p1
-  vlse<sew>.v\t%0,%3,%z4
-  vlse<sew>.v\t%0,%3,%z4,%1.t
-  vle<sew>.v\t%0,%3%p1
-  vle<sew>.v\t%0,%3
-  vle<sew>.v\t%0,%3,%1.t"
+  * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4%p1\" : \"vlse<sew>.v\t%0,%3,%z4%p1\";
+  * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4\" : \"vlse<sew>.v\t%0,%3,%z4\";
+  * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4,%1.t\" : \"vlse<sew>.v\t%0,%3,%z4,%1.t\";
+  * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3%p1\" : \"vle<sew>.v\t%0,%3%p1\";
+  * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3\" : \"vle<sew>.v\t%0,%3\";
+  * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3,%1.t\" : \"vle<sew>.v\t%0,%3,%1.t\";"
   [(set_attr "type" "vlds")
    (set_attr "mode" "<MODE>")])
 
@@ -2186,8 +2186,8 @@ (define_insn "@pred_strided_store<mode>"
 	  (match_dup 0)))]
   "TARGET_VECTOR"
   "@
-  vsse<sew>.v\t%3,%0,%z2%p1
-  vse<sew>.v\t%3,%0%p1"
+  * return TARGET_XTHEADVECTOR ? \"th.vsse.v\t%3,%0,%z2%p1\" : \"vsse<sew>.v\t%3,%0,%z2%p1\";
+  * return TARGET_XTHEADVECTOR ? \"th.vse.v\t%3,%0%p1\" : \"vse<sew>.v\t%3,%0%p1\";"
   [(set_attr "type" "vsts")
    (set_attr "mode" "<MODE>")
    (set (attr "avl_type_idx") (const_int 5))])
@@ -2217,7 +2217,7 @@ (define_insn "@pred_indexed_<order>load<mode>_same_eew"
 	     (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
 	  (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
   "TARGET_VECTOR"
-  "vl<order>xei<sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxe.v\t%0,(%z3),%4%p1" : "vl<order>xei<sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vld<order>x")
    (set_attr "mode" "<MODE>")])
 
@@ -2498,18 +2498,18 @@ (define_insn "@pred_<optab><mode>"
 	  (match_operand:V_VLSI 2 "vector_merge_operand"     "vu,0,vu,0,vu,0,vu,0,vu,0,vu,0")))]
   "TARGET_VECTOR"
   "@
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
-   v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
-   v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
-   v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1"
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+   %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+   %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+   %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2533,7 +2533,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	    (match_operand 4 "pmode_reg_or_uimm5_operand" " r, r,  r,  r, K, K,  K,  K"))
 	  (match_operand:V_VLSI 2 "vector_merge_operand"      "vu, 0, vu,  0,vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.v%o4\t%0,%3,%4%p1"
+  "%^v<insn>.v%o4\t%0,%3,%4%p1"
   [(set_attr "type" "vshift")
    (set_attr "mode" "<MODE>")])
 
@@ -2555,7 +2555,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	    (match_operand:V_VLSI_QHS 3 "register_operand"   "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2576,7 +2576,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	      (match_operand:<VEL> 4 "reg_or_0_operand"  "rJ,rJ, rJ, rJ")))
 	  (match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2597,7 +2597,7 @@ (define_insn "@pred_sub<mode>_reverse_scalar"
 	    (match_operand:V_VLSI_QHS 3 "register_operand"   "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vrsub.vx\t%0,%3,%z4%p1"
+  "%^vrsub.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vialu")
    (set_attr "mode" "<MODE>")])
 
@@ -2653,7 +2653,7 @@ (define_insn "*pred_<optab><mode>_scalar"
 	    (match_operand:V_VLSI_D 3 "register_operand"     "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"   "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2675,7 +2675,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
 	    (match_operand:V_VLSI_D 3 "register_operand"         "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"       "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2729,7 +2729,7 @@ (define_insn "*pred_<optab><mode>_scalar"
 	      (match_operand:<VEL> 4 "reg_or_0_operand"  "rJ,rJ, rJ, rJ")))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"   "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2751,7 +2751,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
 	        (match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ"))))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"       "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%z4%p1"
+  "%^v<insn>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -2805,7 +2805,7 @@ (define_insn "*pred_sub<mode>_reverse_scalar"
 	    (match_operand:V_VLSI_D 3 "register_operand"     "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"   "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vrsub.vx\t%0,%3,%z4%p1"
+  "%^vrsub.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vialu")
    (set_attr "mode" "<MODE>")])
 
@@ -2827,7 +2827,7 @@ (define_insn "*pred_sub<mode>_extended_reverse_scalar"
 	    (match_operand:V_VLSI_D 3 "register_operand"         "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI_D 2 "vector_merge_operand"       "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vrsub.vx\t%0,%3,%z4%p1"
+  "%^vrsub.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vialu")
    (set_attr "mode" "<MODE>")])
 
@@ -2848,7 +2848,7 @@ (define_insn "@pred_mulh<v_su><mode>"
 	     (match_operand:VFULLI 4 "register_operand"  "vr,vr, vr, vr")] VMULH)
 	  (match_operand:VFULLI 2 "vector_merge_operand" "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vmulh<v_su>.vv\t%0,%3,%4%p1"
+  "%^vmulh<v_su>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimul")
    (set_attr "mode" "<MODE>")])
 
@@ -2869,7 +2869,7 @@ (define_insn "@pred_mulh<v_su><mode>_scalar"
 	     (match_operand:VI_QHS 3 "register_operand"   "vr,vr, vr, vr")] VMULH)
 	  (match_operand:VI_QHS 2 "vector_merge_operand"  "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+  "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vimul")
    (set_attr "mode" "<MODE>")])
 
@@ -2923,7 +2923,7 @@ (define_insn "*pred_mulh<v_su><mode>_scalar"
 	     (match_operand:VFULLI_D 3 "register_operand"  "vr,vr, vr, vr")] VMULH)
 	  (match_operand:VFULLI_D 2 "vector_merge_operand" "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+  "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vimul")
    (set_attr "mode" "<MODE>")])
 
@@ -2945,7 +2945,7 @@ (define_insn "*pred_mulh<v_su><mode>_extended_scalar"
 	     (match_operand:VFULLI_D 3 "register_operand"     "vr,vr, vr, vr")] VMULH)
 	  (match_operand:VFULLI_D 2 "vector_merge_operand"    "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+  "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vimul")
    (set_attr "mode" "<MODE>")])
 
@@ -2966,7 +2966,7 @@ (define_insn "@pred_adc<mode>"
 	     (match_operand:<VM> 4 "register_operand"     "vm,vm,vm,vm")] UNSPEC_VADC)
 	  (match_operand:VI 1 "vector_merge_operand"      "vu, 0,vu, 0")))]
   "TARGET_VECTOR"
-  "vadc.v%o3m\t%0,%2,%v3,%4"
+  "%^vadc.v%o3m\t%0,%2,%v3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -2990,7 +2990,7 @@ (define_insn "@pred_sbc<mode>"
 	      (match_operand:<VM> 4 "register_operand"    "vm,vm")] UNSPEC_VSBC)
 	  (match_operand:VI 1 "vector_merge_operand"      "vu, 0")))]
   "TARGET_VECTOR"
-  "vsbc.vvm\t%0,%2,%3,%4"
+  "%^vsbc.vvm\t%0,%2,%3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3015,7 +3015,7 @@ (define_insn "@pred_adc<mode>_scalar"
 	     (match_operand:<VM> 4 "register_operand"      "vm,vm")] UNSPEC_VADC)
 	  (match_operand:VI_QHS 1 "vector_merge_operand"   "vu, 0")))]
   "TARGET_VECTOR"
-  "vadc.vxm\t%0,%2,%3,%4"
+  "%^vadc.vxm\t%0,%2,%3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3040,7 +3040,7 @@ (define_insn "@pred_sbc<mode>_scalar"
 	      (match_operand:<VM> 4 "register_operand"      "vm,vm")] UNSPEC_VSBC)
 	  (match_operand:VI_QHS 1 "vector_merge_operand"    "vu, 0")))]
   "TARGET_VECTOR"
-  "vsbc.vxm\t%0,%2,%z3,%4"
+  "%^vsbc.vxm\t%0,%2,%z3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3098,7 +3098,7 @@ (define_insn "*pred_adc<mode>_scalar"
 	      (match_operand:<VM> 4 "register_operand"      "vm,vm")] UNSPEC_VADC)
 	  (match_operand:VI_D 1 "vector_merge_operand"      "vu, 0")))]
   "TARGET_VECTOR"
-  "vadc.vxm\t%0,%2,%z3,%4"
+  "%^vadc.vxm\t%0,%2,%z3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3124,7 +3124,7 @@ (define_insn "*pred_adc<mode>_extended_scalar"
 	      (match_operand:<VM> 4 "register_operand"           "vm,vm")] UNSPEC_VADC)
 	  (match_operand:VI_D 1 "vector_merge_operand"           "vu, 0")))]
   "TARGET_VECTOR"
-  "vadc.vxm\t%0,%2,%z3,%4"
+  "%^vadc.vxm\t%0,%2,%z3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3182,7 +3182,7 @@ (define_insn "*pred_sbc<mode>_scalar"
 	      (match_operand:<VM> 4 "register_operand"      "vm,vm")] UNSPEC_VSBC)
 	  (match_operand:VI_D 1 "vector_merge_operand"      "vu, 0")))]
   "TARGET_VECTOR"
-  "vsbc.vxm\t%0,%2,%z3,%4"
+  "%^vsbc.vxm\t%0,%2,%z3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3208,7 +3208,7 @@ (define_insn "*pred_sbc<mode>_extended_scalar"
 	      (match_operand:<VM> 4 "register_operand"           "vm,vm")] UNSPEC_VSBC)
 	  (match_operand:VI_D 1 "vector_merge_operand"           "vu, 0")))]
   "TARGET_VECTOR"
-  "vsbc.vxm\t%0,%2,%z3,%4"
+  "%^vsbc.vxm\t%0,%2,%z3,%4"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -3229,7 +3229,7 @@ (define_insn "@pred_madc<mode>"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
   "TARGET_VECTOR"
-  "vmadc.v%o2m\t%0,%1,%v2,%3"
+  "%^vmadc.v%o2m\t%0,%1,%v2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3248,7 +3248,7 @@ (define_insn "@pred_msbc<mode>"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
   "TARGET_VECTOR"
-  "vmsbc.vvm\t%0,%1,%2,%3"
+  "%^vmsbc.vvm\t%0,%1,%2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3268,7 +3268,7 @@ (define_insn "@pred_madc<mode>_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
   "TARGET_VECTOR"
-  "vmadc.vxm\t%0,%1,%2,%3"
+  "%^vmadc.vxm\t%0,%1,%2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3288,7 +3288,7 @@ (define_insn "@pred_msbc<mode>_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
   "TARGET_VECTOR"
-  "vmsbc.vxm\t%0,%1,%z2,%3"
+  "%^vmsbc.vxm\t%0,%1,%z2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3337,7 +3337,7 @@ (define_insn "*pred_madc<mode>_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
   "TARGET_VECTOR"
-  "vmadc.vxm\t%0,%1,%z2,%3"
+  "%^vmadc.vxm\t%0,%1,%z2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3358,7 +3358,7 @@ (define_insn "*pred_madc<mode>_extended_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
   "TARGET_VECTOR"
-  "vmadc.vxm\t%0,%1,%z2,%3"
+  "%^vmadc.vxm\t%0,%1,%z2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3407,7 +3407,7 @@ (define_insn "*pred_msbc<mode>_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
   "TARGET_VECTOR"
-  "vmsbc.vxm\t%0,%1,%z2,%3"
+  "%^vmsbc.vxm\t%0,%1,%z2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3428,7 +3428,7 @@ (define_insn "*pred_msbc<mode>_extended_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
   "TARGET_VECTOR"
-  "vmsbc.vxm\t%0,%1,%z2,%3"
+  "%^vmsbc.vxm\t%0,%1,%z2,%3"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3446,7 +3446,7 @@ (define_insn "@pred_madc<mode>_overflow"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmadc.v%o2\t%0,%1,%v2"
+  "%^vmadc.v%o2\t%0,%1,%v2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3464,7 +3464,7 @@ (define_insn "@pred_msbc<mode>_overflow"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmsbc.vv\t%0,%1,%2"
+  "%^vmsbc.vv\t%0,%1,%2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3483,7 +3483,7 @@ (define_insn "@pred_madc<mode>_overflow_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmadc.vx\t%0,%1,%z2"
+  "%^vmadc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3502,7 +3502,7 @@ (define_insn "@pred_msbc<mode>_overflow_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmsbc.vx\t%0,%1,%z2"
+  "%^vmsbc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3549,7 +3549,7 @@ (define_insn "*pred_madc<mode>_overflow_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmadc.vx\t%0,%1,%z2"
+  "%^vmadc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3569,7 +3569,7 @@ (define_insn "*pred_madc<mode>_overflow_extended_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmadc.vx\t%0,%1,%z2"
+  "%^vmadc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3616,7 +3616,7 @@ (define_insn "*pred_msbc<mode>_overflow_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmsbc.vx\t%0,%1,%z2"
+  "%^vmsbc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3636,7 +3636,7 @@ (define_insn "*pred_msbc<mode>_overflow_extended_scalar"
 	       (reg:SI VL_REGNUM)
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
   "TARGET_VECTOR"
-  "vmsbc.vx\t%0,%1,%z2"
+  "%^vmsbc.vx\t%0,%1,%z2"
   [(set_attr "type" "vicalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "3")
@@ -3660,11 +3660,34 @@ (define_insn "@pred_<optab><mode>"
 	     (match_operand 7 "const_int_operand"        " i, i,  i,  i")
 	     (reg:SI VL_REGNUM)
 	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
-	  (any_int_unop:V_VLSI
+	  (not_unop:V_VLSI
 	    (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
 	  (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.v\t%0,%3%p1"
+  "%^vnot.v\t%0,%3%p1"
+  [(set_attr "type" "vialu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_<optab><mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"	 "=vd,vd, vr, vr")
+	(if_then_else:V_VLSI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")
+	     (match_operand 5 "const_int_operand"	 " i, i,  i,  i")
+	     (match_operand 6 "const_int_operand"	 " i, i,  i,  i")
+	     (match_operand 7 "const_int_operand"	 " i, i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (neg_unop:V_VLSI
+	    (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
+	  (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
+  "TARGET_VECTOR"
+  { return TARGET_XTHEADVECTOR ? "th.vrsub.vx\t%0,%3,x0%p1" : "vneg.v\t%0,%3%p1"; }
   [(set_attr "type" "vialu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -3696,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"         "   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")])
@@ -3716,7 +3739,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"       "   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")])
@@ -3736,7 +3759,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"      "   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")])
@@ -3760,7 +3783,7 @@ (define_insn "@pred_dual_widen_<any_widen_binop:optab><any_extend:su><mode>"
 	      (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vw<any_widen_binop:insn><any_extend:u>.vv\t%0,%3,%4%p1"
+  "%^vw<any_widen_binop:insn><any_extend:u>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vi<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3783,7 +3806,7 @@ (define_insn "@pred_dual_widen_<any_widen_binop:optab><any_extend:su><mode>_scal
 		(match_operand:<VSUBEL> 4 "reg_or_0_operand"       "   rJ,   rJ"))))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vw<any_widen_binop:insn><any_extend:u>.vx\t%0,%3,%z4%p1"
+  "%^vw<any_widen_binop:insn><any_extend:u>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vi<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3804,7 +3827,7 @@ (define_insn "@pred_single_widen_sub<any_extend:su><mode>"
 	      (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vwsub<any_extend:u>.wv\t%0,%3,%4%p1"
+  "%^vwsub<any_extend:u>.wv\t%0,%3,%4%p1"
   [(set_attr "type" "viwalu")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3825,7 +3848,7 @@ (define_insn "@pred_single_widen_add<any_extend:su><mode>"
 	    (match_operand:VWEXTI 3 "register_operand"             "   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vwadd<any_extend:u>.wv\t%0,%3,%4%p1"
+  "%^vwadd<any_extend:u>.wv\t%0,%3,%4%p1"
   [(set_attr "type" "viwalu")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3847,7 +3870,7 @@ (define_insn "@pred_single_widen_<plus_minus:optab><any_extend:su><mode>_scalar"
 		(match_operand:<VSUBEL> 4 "reg_or_0_operand"       "   rJ,   rJ"))))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vw<plus_minus:insn><any_extend:u>.wx\t%0,%3,%z4%p1"
+  "%^vw<plus_minus:insn><any_extend:u>.wx\t%0,%3,%z4%p1"
   [(set_attr "type" "vi<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3869,7 +3892,7 @@ (define_insn "@pred_widen_mulsu<mode>"
 	      (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vwmulsu.vv\t%0,%3,%4%p1"
+  "%^vwmulsu.vv\t%0,%3,%4%p1"
   [(set_attr "type" "viwmul")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3892,7 +3915,7 @@ (define_insn "@pred_widen_mulsu<mode>_scalar"
 		(match_operand:<VSUBEL> 4 "reg_or_0_operand"       "   rJ,   rJ"))))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vwmulsu.vx\t%0,%3,%z4%p1"
+  "%^vwmulsu.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "viwmul")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3915,7 +3938,7 @@ (define_insn "@pred_<optab><mode>"
 	      (reg:<VEL> X0_REGNUM)))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vwcvt<u>.x.x.v\t%0,%3%p1"
+  "%^vwcvt<u>.x.x.v\t%0,%3%p1"
   [(set_attr "type" "viwalu")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set_attr "vl_op_idx" "4")
@@ -3950,7 +3973,7 @@ (define_insn "@pred_narrow_<optab><mode>"
 	     (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  0, 0,  0,  0,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  0,vu,  0, vu,vu, vu,   vu,    0, vu, vu,   vu,    0")))]
   "TARGET_VECTOR"
-  "vn<insn>.w%o4\t%0,%3,%v4%p1"
+  "%^vn<insn>.w%o4\t%0,%3,%v4%p1"
   [(set_attr "type" "vnshift")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3971,7 +3994,7 @@ (define_insn "@pred_narrow_<optab><mode>_scalar"
 	     (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vn<insn>.w%o4\t%0,%3,%4%p1"
+  "%^vn<insn>.w%o4\t%0,%3,%4%p1"
   [(set_attr "type" "vnshift")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -3991,7 +4014,7 @@ (define_insn "@pred_trunc<mode>"
 	    (match_operand:VWEXTI 3 "register_operand"                 "  0,  0,  0,  0,   vr,   vr"))
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vncvt.x.x.w\t%0,%3%p1"
+  "%^vncvt.x.x.w\t%0,%3%p1"
   [(set_attr "type" "vnshift")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set_attr "vl_op_idx" "4")
@@ -4028,14 +4051,14 @@ (define_insn "@pred_<optab><mode>"
 	  (match_operand:VI 2 "vector_merge_operand"     " vu,  0, vu,  0, vu,  0, vu,  0")))]
   "TARGET_VECTOR"
   "@
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<insn>.vv\t%0,%3,%4%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
-   v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1"
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<insn>.vv\t%0,%3,%4%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+   %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
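Note that in multi-alternative "@" templates such as the one above,
each alternative needs its own "%^"; the prefix applies to a single
mnemonic, not to the template as a whole.
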
@@ -4057,7 +4080,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	    (match_operand:VI_QHS 3 "register_operand"   " vr, vr, vr, vr"))
 	  (match_operand:VI_QHS 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4078,7 +4101,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	      (match_operand:<VEL> 4 "register_operand"  "  r,  r,  r,  r")))
 	  (match_operand:VI_QHS 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4132,7 +4155,7 @@ (define_insn "*pred_<optab><mode>_scalar"
 	    (match_operand:VI_D 3 "register_operand"     " vr, vr, vr, vr"))
 	  (match_operand:VI_D 2 "vector_merge_operand"   " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4154,7 +4177,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
 	    (match_operand:VI_D 3 "register_operand"         " vr, vr, vr, vr"))
 	  (match_operand:VI_D 2 "vector_merge_operand"       " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4208,7 +4231,7 @@ (define_insn "*pred_<optab><mode>_scalar"
 	      (match_operand:<VEL> 4 "register_operand"  "  r,  r,  r,  r")))
 	  (match_operand:VI_D 2 "vector_merge_operand"   " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4230,7 +4253,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
 	        (match_operand:<VSUBEL> 4 "register_operand" "  r,  r,  r,  r"))))
 	  (match_operand:VI_D 2 "vector_merge_operand"       " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<insn>.vx\t%0,%3,%4%p1"
+  "%^v<insn>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "<int_binop_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4252,7 +4275,7 @@ (define_insn "@pred_<sat_op><mode>"
 	     (match_operand:VI 4 "register_operand"      " vr, vr, vr, vr")] VSAT_OP)
 	  (match_operand:VI 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<sat_op>.vv\t%0,%3,%4%p1"
+  "%^v<sat_op>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "<sat_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4275,7 +4298,7 @@ (define_insn "@pred_<sat_op><mode>_scalar"
 	     (match_operand:<VEL> 4 "reg_or_0_operand"   " rJ, rJ, rJ, rJ")] VSAT_ARITH_OP)
 	  (match_operand:VI_QHS 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<sat_op>.vx\t%0,%3,%z4%p1"
+  "%^v<sat_op>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<sat_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4297,7 +4320,7 @@ (define_insn "@pred_<sat_op><mode>_scalar"
 	     (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK")] VSAT_SHIFT_OP)
 	  (match_operand:VI 2 "vector_merge_operand"       " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<sat_op>.v%o4\t%0,%3,%4%p1"
+  "%^v<sat_op>.v%o4\t%0,%3,%4%p1"
   [(set_attr "type" "<sat_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4355,7 +4378,7 @@ (define_insn "*pred_<sat_op><mode>_scalar"
 	     (match_operand:<VEL> 4 "reg_or_0_operand"   " rJ, rJ, rJ, rJ")] VSAT_ARITH_OP)
 	  (match_operand:VI_D 2 "vector_merge_operand"   " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<sat_op>.vx\t%0,%3,%z4%p1"
+  "%^v<sat_op>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<sat_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4378,7 +4401,7 @@ (define_insn "*pred_<sat_op><mode>_extended_scalar"
 	       (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSAT_ARITH_OP)
 	  (match_operand:VI_D 2 "vector_merge_operand"      " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "v<sat_op>.vx\t%0,%3,%z4%p1"
+  "%^v<sat_op>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "<sat_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -4401,7 +4424,7 @@ (define_insn "@pred_narrow_clip<v_su><mode>"
 	     (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  0, 0,  0,  0,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  0,vu,  0, vu,vu, vu,   vu,    0, vu, vu,   vu,    0")))]
   "TARGET_VECTOR"
-  "vnclip<v_su>.w%o4\t%0,%3,%v4%p1"
+  "%^vnclip<v_su>.w%o4\t%0,%3,%v4%p1"
   [(set_attr "type" "vnclip")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -4423,7 +4446,7 @@ (define_insn "@pred_narrow_clip<v_su><mode>_scalar"
 	     (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vnclip<v_su>.w%o4\t%0,%3,%4%p1"
+  "%^vnclip<v_su>.w%o4\t%0,%3,%4%p1"
   [(set_attr "type" "vnclip")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -4466,7 +4489,7 @@ (define_insn "*pred_cmp<mode>_merge_tie_mask"
 	      (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  "%^vms%B2.v%o4\t%0,%3,%v4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4490,7 +4513,7 @@ (define_insn "*pred_cmp<mode>"
 	      (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  "%^vms%B3.v%o5\t%0,%4,%v5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4510,7 +4533,7 @@ (define_insn "*pred_cmp<mode>_narrow"
 	      (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    0,    0, vrvi,    0,    0, vrvi, vrvi")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    0,    0,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  "%^vms%B3.v%o5\t%0,%4,%v5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4546,7 +4569,7 @@ (define_insn "*pred_ltge<mode>_merge_tie_mask"
 	      (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  "%^vms%B2.v%o4\t%0,%3,%v4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4570,7 +4593,7 @@ (define_insn "*pred_ltge<mode>"
 	      (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  "%^vms%B3.v%o5\t%0,%4,%v5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4590,7 +4613,7 @@ (define_insn "*pred_ltge<mode>_narrow"
 	      (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    0,    0, vrvj,    0,    0, vrvj, vrvj")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    0,    0,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  "%^vms%B3.v%o5\t%0,%4,%v5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4628,7 +4651,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
 	        (match_operand:<VEL> 4 "register_operand"      "  r"))])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4653,7 +4676,7 @@ (define_insn "*pred_cmp<mode>_scalar"
 	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4674,7 +4697,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
 	        (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4712,7 +4735,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
 	      (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4737,7 +4760,7 @@ (define_insn "*pred_eqne<mode>_scalar"
 	      (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4758,7 +4781,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
 	      (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    0,    0,   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4853,7 +4876,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
 	        (match_operand:<VEL> 4 "register_operand"       "  r"))])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4877,7 +4900,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
 	      (match_operand:V_VLSI_D 3 "register_operand"          " vr")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -4902,7 +4925,7 @@ (define_insn "*pred_cmp<mode>_scalar"
 	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4923,7 +4946,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
 	        (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4944,7 +4967,7 @@ (define_insn "*pred_eqne<mode>_scalar"
 	      (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4965,7 +4988,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
 	      (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    0,    0,   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -4986,7 +5009,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar_merge_tie_mask"
 	          (match_operand:<VSUBEL> 4 "register_operand" "  r")))])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -5012,7 +5035,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar"
 	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])
 	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -5033,7 +5056,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar_narrow"
 	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])
 	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -5054,7 +5077,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar_merge_tie_mask"
 	      (match_operand:V_VLSI_D 3 "register_operand"           " vr")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vms%B2.vx\t%0,%3,%4,v0.t"
+  "%^vms%B2.vx\t%0,%3,%4,v0.t"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -5080,7 +5103,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar"
 	      (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -5101,7 +5124,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar_narrow"
 	      (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    0,    0,   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vms%B3.vx\t%0,%4,%5%p1"
+  "%^vms%B3.vx\t%0,%4,%5%p1"
   [(set_attr "type" "vicmp")
    (set_attr "mode" "<MODE>")])
 
@@ -5270,12 +5293,12 @@ (define_insn "*pred_mul_plus<mode>_undef"
 	  (match_operand:V_VLSI 2 "vector_undef_operand")))]
   "TARGET_VECTOR"
   "@
-   vmadd.vv\t%0,%4,%5%p1
-   vmacc.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%3,%4%p1
-   vmadd.vv\t%0,%4,%5%p1
-   vmacc.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%5\;vmacc.vv\t%0,%3,%4%p1"
+   %^vmadd.vv\t%0,%4,%5%p1
+   %^vmacc.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%3,%4%p1
+   %^vmadd.vv\t%0,%4,%5%p1
+   %^vmacc.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%5\;%^vmacc.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")])
 
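The same applies within a single alternative: where one alternative
expands to two instructions (the vmv.v.v copy followed by the
multiply-add), each instruction carries its own "%^".
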
@@ -5298,10 +5321,10 @@ (define_insn "*pred_madd<mode>"
 	  (match_dup 2)))]
   "TARGET_VECTOR"
   "@
-   vmadd.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1
-   vmadd.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1"
+   %^vmadd.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vmadd.vv\t%0,%3,%4%p1
+   %^vmadd.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vmadd.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5329,10 +5352,10 @@ (define_insn "*pred_macc<mode>"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vmacc.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1
-   vmacc.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1"
+   %^vmacc.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%2,%3%p1
+   %^vmacc.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5382,10 +5405,10 @@ (define_insn "*pred_madd<mode>_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1
-   vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1"
+   %^vmadd.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vmadd.vx\t%0,%2,%4%p1
+   %^vmadd.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vmadd.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5414,10 +5437,10 @@ (define_insn "*pred_macc<mode>_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
-   vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+   %^vmacc.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1
+   %^vmacc.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5482,10 +5505,10 @@ (define_insn "*pred_madd<mode>_extended_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1
-   vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1"
+   %^vmadd.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vmadd.vx\t%0,%2,%4%p1
+   %^vmadd.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vmadd.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5515,10 +5538,10 @@ (define_insn "*pred_macc<mode>_extended_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
-   vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+   %^vmacc.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1
+   %^vmacc.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5568,12 +5591,12 @@ (define_insn "*pred_minus_mul<mode>_undef"
 	  (match_operand:V_VLSI 2 "vector_undef_operand")))]
   "TARGET_VECTOR"
   "@
-   vnmsub.vv\t%0,%4,%5%p1
-   vnmsac.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1
-   vnmsub.vv\t%0,%4,%5%p1
-   vnmsac.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1"
+   %^vnmsub.vv\t%0,%4,%5%p1
+   %^vnmsac.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vv\t%0,%4,%5%p1
+   %^vnmsub.vv\t%0,%4,%5%p1
+   %^vnmsac.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")])
 
@@ -5596,10 +5619,10 @@ (define_insn "*pred_nmsub<mode>"
 	  (match_dup 2)))]
   "TARGET_VECTOR"
   "@
-   vnmsub.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1
-   vnmsub.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1"
+   %^vnmsub.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vnmsub.vv\t%0,%3,%4%p1
+   %^vnmsub.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vnmsub.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5627,10 +5650,10 @@ (define_insn "*pred_nmsac<mode>"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vnmsac.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1
-   vnmsac.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1"
+   %^vnmsac.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vv\t%0,%2,%3%p1
+   %^vnmsac.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5680,10 +5703,10 @@ (define_insn "*pred_nmsub<mode>_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
-   vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+   %^vnmsub.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1
+   %^vnmsub.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5712,10 +5735,10 @@ (define_insn "*pred_nmsac<mode>_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
-   vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+   %^vnmsac.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1
+   %^vnmsac.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5780,10 +5803,10 @@ (define_insn "*pred_nmsub<mode>_extended_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
-   vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+   %^vnmsub.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1
+   %^vnmsub.vx\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5813,10 +5836,10 @@ (define_insn "*pred_nmsac<mode>_extended_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
-   vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+   %^vnmsac.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1
+   %^vnmsac.vx\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5852,7 +5875,7 @@ (define_insn "@pred_widen_mul_plus<su><mode>"
 	    (match_operand:VWEXTI 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vwmacc<u>.vv\t%0,%3,%4%p1"
+  "%^vwmacc<u>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "viwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -5877,7 +5900,7 @@ (define_insn "@pred_widen_mul_plus<su><mode>_scalar"
 	    (match_operand:VWEXTI 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vwmacc<u>.vx\t%0,%3,%4%p1"
+  "%^vwmacc<u>.vx\t%0,%3,%4%p1"
   [(set_attr "type" "viwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -5901,7 +5924,7 @@ (define_insn "@pred_widen_mul_plussu<mode>"
 	    (match_operand:VWEXTI 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vwmaccsu.vv\t%0,%3,%4%p1"
+  "%^vwmaccsu.vv\t%0,%3,%4%p1"
   [(set_attr "type" "viwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -5926,7 +5949,7 @@ (define_insn "@pred_widen_mul_plussu<mode>_scalar"
 	    (match_operand:VWEXTI 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vwmaccsu.vx\t%0,%3,%4%p1"
+  "%^vwmaccsu.vx\t%0,%3,%4%p1"
   [(set_attr "type" "viwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -5951,7 +5974,7 @@ (define_insn "@pred_widen_mul_plusus<mode>_scalar"
 	    (match_operand:VWEXTI 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vwmaccus.vx\t%0,%3,%4%p1"
+  "%^vwmaccus.vx\t%0,%3,%4%p1"
   [(set_attr "type" "viwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -5986,7 +6009,7 @@ (define_insn "@pred_<optab><mode>"
 	    (match_operand:VB_VLS 4 "register_operand"               " vr"))
 	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu")))]
   "TARGET_VECTOR"
-  "vm<insn>.mm\t%0,%3,%4"
+  "%^vm<insn>.mm\t%0,%3,%4"
   [(set_attr "type" "vmalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "5")
@@ -6007,7 +6030,7 @@ (define_insn "@pred_n<optab><mode>"
 	      (match_operand:VB_VLS 4 "register_operand"             " vr")))
 	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu")))]
   "TARGET_VECTOR"
-  "vm<ninsn>.mm\t%0,%3,%4"
+  "%^vm<ninsn>.mm\t%0,%3,%4"
   [(set_attr "type" "vmalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "5")
@@ -6028,7 +6051,7 @@ (define_insn "@pred_<optab>not<mode>"
 	      (match_operand:VB_VLS 4 "register_operand"             " vr")))
 	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu")))]
   "TARGET_VECTOR"
-  "vm<insn>n.mm\t%0,%3,%4"
+  "%^vm<insn>n.mm\t%0,%3,%4"
   [(set_attr "type" "vmalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "5")
@@ -6047,7 +6070,7 @@ (define_insn "@pred_not<mode>"
 	    (match_operand:VB_VLS 3 "register_operand"               " vr"))
 	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu")))]
   "TARGET_VECTOR"
-  "vmnot.m\t%0,%3"
+  "%^vmnot.m\t%0,%3"
   [(set_attr "type" "vmalu")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -6065,7 +6088,7 @@ (define_insn "@pred_popcount<VB:mode><P:mode>"
 	     (reg:SI VL_REGNUM)
 	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
   "TARGET_VECTOR"
-  "vcpop.m\t%0,%2%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vmpopc.m\t%0,%2%p1" : "vcpop.m\t%0,%2%p1"; }
   [(set_attr "type" "vmpop")
    (set_attr "mode" "<VB:MODE>")])
 
@@ -6083,7 +6106,7 @@ (define_insn "@pred_ffs<VB:mode><P:mode>"
 	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
 	  (const_int -1)))]
   "TARGET_VECTOR"
-  "vfirst.m\t%0,%2%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vmfirst.m\t%0,%2%p1" : "vfirst.m\t%0,%2%p1"; }
   [(set_attr "type" "vmffs")
    (set_attr "mode" "<VB:MODE>")])
 
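Two cases where a prefix alone is not enough: under the 0.7.1-era
naming that XTheadVector retains, vcpop.m was spelled vmpopc.m and
vfirst.m was spelled vmfirst.m, so these two templates return the
full mnemonic from a C block instead of using "%^".
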
@@ -6101,7 +6124,7 @@ (define_insn "@pred_<misc_op><mode>"
 	    [(match_operand:VB 3 "register_operand"    "   vr,   vr")] VMISC)
 	  (match_operand:VB 2 "vector_merge_operand"   "   vu,    0")))]
   "TARGET_VECTOR"
-  "vm<misc_op>.m\t%0,%3%p1"
+  "%^vm<misc_op>.m\t%0,%3%p1"
   [(set_attr "type" "vmsfs")
    (set_attr "mode" "<MODE>")])
 
@@ -6120,7 +6143,7 @@ (define_insn "@pred_iota<mode>"
 	    [(match_operand:<VM> 3 "register_operand"    "   vr,   vr")] UNSPEC_VIOTA)
 	  (match_operand:VI 2 "vector_merge_operand"     "   vu,    0")))]
   "TARGET_VECTOR"
-  "viota.m\t%0,%3%p1"
+  "%^viota.m\t%0,%3%p1"
   [(set_attr "type" "vmiota")
    (set_attr "mode" "<MODE>")])
 
@@ -6138,7 +6161,7 @@ (define_insn "@pred_series<mode>"
 	  (vec_series:V_VLSI (const_int 0) (const_int 1))
 	  (match_operand:V_VLSI 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vid.v\t%0%p1"
+  "%^vid.v\t%0%p1"
   [(set_attr "type" "vmidx")
    (set_attr "mode" "<MODE>")])
 
@@ -6170,7 +6193,7 @@ (define_insn "@pred_<optab><mode>"
 	    (match_operand:V_VLSF 4 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.vv\t%0,%3,%4%p1"
+  "%^vf<insn>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6192,7 +6215,7 @@ (define_insn "@pred_<optab><mode>"
 	    (match_operand:V_VLSF 4 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.vv\t%0,%3,%4%p1"
+  "%^vf<insn>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -6236,7 +6259,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	    (match_operand:VF 3 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.vf\t%0,%3,%4%p1"
+  "%^vf<insn>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6259,7 +6282,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	    (match_operand:VF 3 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.vf\t%0,%3,%4%p1"
+  "%^vf<insn>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -6304,7 +6327,7 @@ (define_insn "@pred_<optab><mode>_scalar"
 	      (match_operand:<VEL> 4 "register_operand"  "  f,  f,  f,  f")))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.vf\t%0,%3,%4%p1"
+  "%^vf<insn>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6329,7 +6352,7 @@ (define_insn "@pred_<optab><mode>_reverse_scalar"
 	    (match_operand:VF 3 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfr<insn>.vf\t%0,%3,%4%p1"
+  "%^vfr<insn>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6351,7 +6374,7 @@ (define_insn "@pred_<copysign><mode>"
 	     (match_operand:V_VLSF 4 "register_operand"  " vr, vr, vr, vr")] VCOPYSIGNS)
 	  (match_operand:V_VLSF 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfsgnj<nx>.vv\t%0,%3,%4%p1"
+  "%^vfsgnj<nx>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfsgnj")
    (set_attr "mode" "<MODE>")])
 
@@ -6372,7 +6395,7 @@ (define_insn "@pred_ncopysign<mode>"
 	       (match_operand:VF 4 "register_operand"       " vr, vr, vr, vr")] UNSPEC_VCOPYSIGN))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfsgnjn.vv\t%0,%3,%4%p1"
+  "%^vfsgnjn.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfsgnj")
    (set_attr "mode" "<MODE>")])
 
@@ -6393,7 +6416,7 @@ (define_insn "@pred_<copysign><mode>_scalar"
 	       (match_operand:<VEL> 4 "register_operand" "  f,  f,  f,  f"))] VCOPYSIGNS)
 	  (match_operand:V_VLSF 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfsgnj<nx>.vf\t%0,%3,%4%p1"
+  "%^vfsgnj<nx>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vfsgnj")
    (set_attr "mode" "<MODE>")])
 
@@ -6415,7 +6438,7 @@ (define_insn "@pred_ncopysign<mode>_scalar"
 		 (match_operand:<VEL> 4 "register_operand" "  f,  f,  f,  f"))] UNSPEC_VCOPYSIGN))
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfsgnjn.vf\t%0,%3,%4%p1"
+  "%^vfsgnjn.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vfsgnj")
    (set_attr "mode" "<MODE>")])
 
@@ -6471,12 +6494,12 @@ (define_insn "*pred_mul_<optab><mode>_undef"
 	  (match_operand:V_VLSF 2 "vector_undef_operand")))]
   "TARGET_VECTOR"
   "@
-   vf<madd_msub>.vv\t%0,%4,%5%p1
-   vf<macc_msac>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf<madd_msub>.vv\t%0,%4,%5%p1
-   vf<madd_msub>.vv\t%0,%4,%5%p1
-   vf<macc_msac>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf<madd_msub>.vv\t%0,%4,%5%p1"
+   %^vf<madd_msub>.vv\t%0,%4,%5%p1
+   %^vf<macc_msac>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vv\t%0,%4,%5%p1
+   %^vf<madd_msub>.vv\t%0,%4,%5%p1
+   %^vf<macc_msac>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6503,10 +6526,10 @@ (define_insn "*pred_<madd_msub><mode>"
 	  (match_dup 2)))]
   "TARGET_VECTOR"
   "@
-   vf<madd_msub>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf<madd_msub>.vv\t%0,%3,%4%p1
-   vf<madd_msub>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf<madd_msub>.vv\t%0,%3,%4%p1"
+   %^vf<madd_msub>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vf<madd_msub>.vv\t%0,%3,%4%p1
+   %^vf<madd_msub>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vf<madd_msub>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6538,10 +6561,10 @@ (define_insn "*pred_<macc_msac><mode>"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vf<macc_msac>.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<macc_msac>.vv\t%0,%2,%3%p1
-   vf<macc_msac>.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<macc_msac>.vv\t%0,%2,%3%p1"
+   %^vf<macc_msac>.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vv\t%0,%2,%3%p1
+   %^vf<macc_msac>.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6597,10 +6620,10 @@ (define_insn "*pred_<madd_msub><mode>_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vf<madd_msub>.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf<madd_msub>.vf\t%0,%2,%4%p1
-   vf<madd_msub>.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf<madd_msub>.vf\t%0,%2,%4%p1"
+   %^vf<madd_msub>.vf\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vf\t%0,%2,%4%p1
+   %^vf<madd_msub>.vf\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vf\t%0,%2,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6633,10 +6656,10 @@ (define_insn "*pred_<macc_msac><mode>_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vf<macc_msac>.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<macc_msac>.vf\t%0,%2,%3%p1
-   vf<macc_msac>.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<macc_msac>.vf\t%0,%2,%3%p1"
+   %^vf<macc_msac>.vf\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vf\t%0,%2,%3%p1
+   %^vf<macc_msac>.vf\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vf\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6694,12 +6717,12 @@ (define_insn "*pred_mul_neg_<optab><mode>_undef"
 	  (match_operand:V_VLSF 2 "vector_undef_operand")))]
   "TARGET_VECTOR"
   "@
-   vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
-   vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
-   vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
-   vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vv\t%0,%4,%5%p1"
+   %^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+   %^vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+   %^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+   %^vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6727,10 +6750,10 @@ (define_insn "*pred_<nmsub_nmadd><mode>"
 	  (match_dup 2)))]
   "TARGET_VECTOR"
   "@
-   vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
-   vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf<nmsub_nmadd>.vv\t%0,%3,%4%p1"
+   %^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+   %^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+   %^vmv.v.v\t%0,%2\;%^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6763,10 +6786,10 @@ (define_insn "*pred_<nmsac_nmacc><mode>"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
-   vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vv\t%0,%2,%3%p1"
+   %^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+   %^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6824,10 +6847,10 @@ (define_insn "*pred_<nmsub_nmadd><mode>_scalar"
 	  (match_dup 3)))]
   "TARGET_VECTOR"
   "@
-   vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
-   vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vf\t%0,%2,%4%p1"
+   %^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+   %^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+   %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6861,10 +6884,10 @@ (define_insn "*pred_<nmsac_nmacc><mode>_scalar"
 	  (match_dup 4)))]
   "TARGET_VECTOR"
   "@
-   vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
-   vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vf\t%0,%2,%3%p1"
+   %^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+   %^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+   %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6903,7 +6926,7 @@ (define_insn "@pred_<optab><mode>"
 	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.v\t%0,%3%p1"
+  "%^vf<insn>.v\t%0,%3%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -6928,7 +6951,7 @@ (define_insn "@pred_<optab><mode>"
 	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<insn>.v\t%0,%3%p1"
+  "%^vf<insn>.v\t%0,%3%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")
    (set_attr "vl_op_idx" "4")
@@ -6951,7 +6974,7 @@ (define_insn "@pred_<misc_op><mode>"
 	    [(match_operand:VF 3 "register_operand"       " vr, vr, vr, vr")] VFMISC)
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<misc_op>.v\t%0,%3%p1"
+  "%^vf<misc_op>.v\t%0,%3%p1"
   [(set_attr "type" "<float_insn_type>")
    (set_attr "mode" "<MODE>")])
 
@@ -6972,7 +6995,7 @@ (define_insn "@pred_<misc_frm_op><mode>"
 	    [(match_operand:VF 3 "register_operand"       " vr, vr, vr, vr")] VFMISC_FRM)
 	  (match_operand:VF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vf<misc_frm_op>.v\t%0,%3%p1"
+  "%^vf<misc_frm_op>.v\t%0,%3%p1"
   [(set_attr "type" "<float_frm_insn_type>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6993,7 +7016,7 @@ (define_insn "@pred_class<mode>"
 	    [(match_operand:VF 3 "register_operand"          " vr, vr, vr, vr")] UNSPEC_VFCLASS)
 	  (match_operand:<VCONVERT> 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfclass.v\t%0,%3%p1"
+  "%^vfclass.v\t%0,%3%p1"
   [(set_attr "type" "vfclass")
    (set_attr "mode" "<MODE>")])
 
@@ -7026,7 +7049,7 @@ (define_insn "@pred_dual_widen_<optab><mode>"
 	      (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")))
 	  (match_operand:VWEXTF 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfw<insn>.vv\t%0,%3,%4%p1"
+  "%^vfw<insn>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vf<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7053,7 +7076,7 @@ (define_insn "@pred_dual_widen_<optab><mode>_scalar"
 		(match_operand:<VSUBEL> 4 "register_operand"       "    f,    f"))))
 	  (match_operand:VWEXTF 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfw<insn>.vf\t%0,%3,%4%p1"
+  "%^vfw<insn>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vf<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7078,7 +7101,7 @@ (define_insn "@pred_single_widen_add<mode>"
 	    (match_operand:VWEXTF 3 "register_operand"             "   vr,   vr"))
 	  (match_operand:VWEXTF 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfwadd.wv\t%0,%3,%4%p1"
+  "%^vfwadd.wv\t%0,%3,%4%p1"
   [(set_attr "type" "vfwalu")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7103,7 +7126,7 @@ (define_insn "@pred_single_widen_sub<mode>"
 	      (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")))
 	  (match_operand:VWEXTF 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfwsub.wv\t%0,%3,%4%p1"
+  "%^vfwsub.wv\t%0,%3,%4%p1"
   [(set_attr "type" "vfwalu")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7129,7 +7152,7 @@ (define_insn "@pred_single_widen_<plus_minus:optab><mode>_scalar"
 		(match_operand:<VSUBEL> 4 "register_operand"       "    f,    f"))))
 	  (match_operand:VWEXTF 2 "vector_merge_operand"           "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfw<insn>.wf\t%0,%3,%4%p1"
+  "%^vfw<insn>.wf\t%0,%3,%4%p1"
   [(set_attr "type" "vf<widen_binop_insn_type>")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7164,7 +7187,7 @@ (define_insn "@pred_widen_mul_<optab><mode>"
 	    (match_operand:VWEXTF 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vfw<macc_msac>.vv\t%0,%3,%4%p1"
+  "%^vfw<macc_msac>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7193,7 +7216,7 @@ (define_insn "@pred_widen_mul_<optab><mode>_scalar"
 	    (match_operand:VWEXTF 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vfw<macc_msac>.vf\t%0,%3,%4%p1"
+  "%^vfw<macc_msac>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vfwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7222,7 +7245,7 @@ (define_insn "@pred_widen_mul_neg_<optab><mode>"
 	      (match_operand:VWEXTF 2 "register_operand"               "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vfw<nmsac_nmacc>.vv\t%0,%3,%4%p1"
+  "%^vfw<nmsac_nmacc>.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7252,7 +7275,7 @@ (define_insn "@pred_widen_mul_neg_<optab><mode>_scalar"
 	    (match_operand:VWEXTF 2 "register_operand"                 "    0"))
 	  (match_dup 2)))]
   "TARGET_VECTOR"
-  "vfw<nmsac_nmacc>.vf\t%0,%3,%4%p1"
+  "%^vfw<nmsac_nmacc>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vfwmuladd")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7298,7 +7321,7 @@ (define_insn "*pred_cmp<mode>"
 	      (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vmf%B3.vv\t%0,%4,%5%p1"
+  "%^vmf%B3.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7317,7 +7340,7 @@ (define_insn "*pred_cmp<mode>_narrow_merge_tie_mask"
 	      (match_operand:V_VLSF 4 "register_operand"           " vr")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vmf%B2.vv\t%0,%3,%4,v0.t"
+  "%^vmf%B2.vv\t%0,%3,%4,v0.t"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -7341,7 +7364,7 @@ (define_insn "*pred_cmp<mode>_narrow"
 	      (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    0,    0,   vr,    0,    0,   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    0,    0,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vmf%B3.vv\t%0,%4,%5%p1"
+  "%^vmf%B3.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7379,7 +7402,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
 	        (match_operand:<VEL> 4 "register_operand"     "  f"))])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  "%^vmf%B2.vf\t%0,%3,%4,v0.t"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -7404,7 +7427,7 @@ (define_insn "*pred_cmp<mode>_scalar"
 	        (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vmf%B3.vf\t%0,%4,%5%p1"
+  "%^vmf%B3.vf\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7425,7 +7448,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
 	        (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vmf%B3.vf\t%0,%4,%5%p1"
+  "%^vmf%B3.vf\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7463,7 +7486,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
 	      (match_operand:V_VLSF 3 "register_operand"      " vr")])
 	  (match_dup 1)))]
   "TARGET_VECTOR"
-  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  "%^vmf%B2.vf\t%0,%3,%4,v0.t"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "1")
@@ -7488,7 +7511,7 @@ (define_insn "*pred_eqne<mode>_scalar"
 	      (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
-  "vmf%B3.vf\t%0,%4,%5%p1"
+  "%^vmf%B3.vf\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7509,7 +7532,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
 	      (match_operand:V_VLSF 4 "register_operand"      "   vr,    0,    0,   vr,   vr")])
 	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    0,   vu,    0")))]
   "TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
-  "vmf%B3.vf\t%0,%4,%5%p1"
+  "%^vmf%B3.vf\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
    (set_attr "mode" "<MODE>")])
 
@@ -7536,7 +7559,7 @@ (define_insn "@pred_merge<mode>_scalar"
 	(match_operand:<VM> 4 "register_operand"    " vm,vm"))
       (match_operand:V_VLSF 1 "vector_merge_operand"    " vu, 0")))]
   "TARGET_VECTOR"
-  "vfmerge.vfm\t%0,%2,%3,%4"
+  "%^vfmerge.vfm\t%0,%2,%3,%4"
   [(set_attr "type" "vfmerge")
    (set_attr "mode" "<MODE>")])
 
@@ -7564,7 +7587,7 @@ (define_insn "@pred_fcvt_x<v_su>_f<mode>"
 	     [(match_operand:V_VLSF 3 "register_operand"     " vr, vr, vr, vr")] VFCVTS)
 	  (match_operand:<VCONVERT> 2 "vector_merge_operand" " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfcvt.x<v_su>.f.v\t%0,%3%p1"
+  "%^vfcvt.x<v_su>.f.v\t%0,%3%p1"
   [(set_attr "type" "vfcvtftoi")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -7584,7 +7607,7 @@ (define_insn "@pred_<fix_cvt><mode>"
 	  (any_fix:<VCONVERT>
 	     (match_operand:V_VLSF 3 "register_operand"          " vr, vr, vr, vr"))
 	  (match_operand:<VCONVERT> 2 "vector_merge_operand" " vu,  0, vu,  0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vfcvt.rtz.x<u>.f.v\t%0,%3%p1"
   [(set_attr "type" "vfcvtftoi")
    (set_attr "mode" "<MODE>")])
@@ -7606,7 +7629,7 @@ (define_insn "@pred_<float_cvt><mode>"
 	     (match_operand:<VCONVERT> 3 "register_operand" " vr, vr, vr, vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"        " vu,  0, vu,  0")))]
   "TARGET_VECTOR"
-  "vfcvt.f.x<u>.v\t%0,%3%p1"
+  "%^vfcvt.f.x<u>.v\t%0,%3%p1"
   [(set_attr "type" "vfcvtitof")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -7636,7 +7659,7 @@ (define_insn "@pred_widen_fcvt_x<v_su>_f<mode>"
 	     [(match_operand:<VNCONVERT> 3 "register_operand" "   vr,   vr")] VFCVTS)
 	  (match_operand:VWCONVERTI 2 "vector_merge_operand"  "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfwcvt.x<v_su>.f.v\t%0,%3%p1"
+  "%^vfwcvt.x<v_su>.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftoi")
    (set_attr "mode" "<VNCONVERT>")
    (set (attr "frm_mode")
@@ -7656,7 +7679,7 @@ (define_insn "@pred_widen_<fix_cvt><mode>"
 	  (any_fix:VWCONVERTI
 	     (match_operand:<VNCONVERT> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:VWCONVERTI 2 "vector_merge_operand" "   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vfwcvt.rtz.x<u>.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftoi")
    (set_attr "mode" "<VNCONVERT>")])
@@ -7676,7 +7699,7 @@ (define_insn "@pred_widen_<float_cvt><mode>"
 	     (match_operand:<VNCONVERT> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:V_VLSF 2 "vector_merge_operand"         "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfwcvt.f.x<u>.v\t%0,%3%p1"
+  "%^vfwcvt.f.x<u>.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtitof")
    (set_attr "mode" "<VNCONVERT>")])
 
@@ -7695,7 +7718,7 @@ (define_insn "@pred_extend<mode>"
 	     (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "   vr,   vr"))
 	  (match_operand:VWEXTF_ZVFHMIN 2 "vector_merge_operand"          "   vu,    0")))]
   "TARGET_VECTOR"
-  "vfwcvt.f.f.v\t%0,%3%p1"
+  "%^vfwcvt.f.f.v\t%0,%3%p1"
   [(set_attr "type" "vfwcvtftof")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
 
@@ -7723,7 +7746,7 @@ (define_insn "@pred_narrow_fcvt_x<v_su>_f<mode>"
 	     [(match_operand:V_VLSF 3 "register_operand"       "  0,  0,  0,  0,   vr,   vr")] VFCVTS)
 	  (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vfncvt.x<v_su>.f.w\t%0,%3%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vfncvt.x<v_su>.f.v\t%0,%3%p1" : "vfncvt.x<v_su>.f.w\t%0,%3%p1"; }
   [(set_attr "type" "vfncvtftoi")
    (set_attr "mode" "<VNCONVERT>")
    (set (attr "frm_mode")
@@ -7743,7 +7766,7 @@ (define_insn "@pred_narrow_<fix_cvt><mode>"
 	  (any_fix:<VNCONVERT>
 	     (match_operand:V_VLSF 3 "register_operand"           "  0,  0,  0,  0,   vr,   vr"))
 	  (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  0, vu,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vfncvt.rtz.x<u>.f.w\t%0,%3%p1"
   [(set_attr "type" "vfncvtftoi")
    (set_attr "mode" "<VNCONVERT>")])
@@ -7765,7 +7788,7 @@ (define_insn "@pred_narrow_<float_cvt><mode>"
 	     (match_operand:VWCONVERTI 3 "register_operand"   "  0,  0,  0,  0,   vr,   vr"))
 	  (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vfncvt.f.x<u>.w\t%0,%3%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vfncvt.f.x<u>.v\t%0,%3%p1" : "vfncvt.f.x<u>.w\t%0,%3%p1"; }
   [(set_attr "type" "vfncvtitof")
    (set_attr "mode" "<VNCONVERT>")
    (set (attr "frm_mode")
@@ -7788,7 +7811,7 @@ (define_insn "@pred_trunc<mode>"
 	     (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  0,  0,  0,  0,   vr,   vr"))
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  0, vu,  0,   vu,    0")))]
   "TARGET_VECTOR"
-  "vfncvt.f.f.w\t%0,%3%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vfncvt.f.f.v\t%0,%3%p1" : "vfncvt.f.f.w\t%0,%3%p1"; }
   [(set_attr "type" "vfncvtftof")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")
    (set (attr "frm_mode")
@@ -7809,7 +7832,7 @@ (define_insn "@pred_rod_trunc<mode>"
 	    [(float_truncate:<V_DOUBLE_TRUNC>
 	       (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"          "  0,  0,  0,  0,   vr,   vr"))] UNSPEC_ROD)
 	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  0, vu,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vfncvt.rod.f.f.w\t%0,%3%p1"
   [(set_attr "type" "vfncvtftof")
    (set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -7841,7 +7864,7 @@ (define_insn "@pred_<reduc_op><mode>"
            ] ANY_REDUC)
 	   (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
   "TARGET_VECTOR"
-  "v<reduc_op>.vs\t%0,%3,%4%p1"
+  "%^v<reduc_op>.vs\t%0,%3,%4%p1"
   [(set_attr "type" "vired")
    (set_attr "mode" "<MODE>")])
 
@@ -7862,7 +7885,7 @@ (define_insn "@pred_<reduc_op><mode>"
            ] ANY_WREDUC)
 	   (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
   "TARGET_VECTOR"
-  "v<reduc_op>.vs\t%0,%3,%4%p1"
+  "%^v<reduc_op>.vs\t%0,%3,%4%p1"
   [(set_attr "type" "viwred")
    (set_attr "mode" "<MODE>")])
 
@@ -7883,7 +7906,7 @@ (define_insn "@pred_<reduc_op><mode>"
            ] ANY_FREDUC)
 	   (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
   "TARGET_VECTOR"
-  "vf<reduc_op>.vs\t%0,%3,%4%p1"
+  "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
   [(set_attr "type" "vfredu")
    (set_attr "mode" "<MODE>")])
 
@@ -7906,7 +7929,7 @@ (define_insn "@pred_<reduc_op><mode>"
            ] ANY_FREDUC_SUM)
 	   (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
   "TARGET_VECTOR"
-  "vf<reduc_op>.vs\t%0,%3,%4%p1"
+  "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
   [(set_attr "type" "vfred<order>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -7931,7 +7954,7 @@ (define_insn "@pred_<reduc_op><mode>"
            ] ANY_FWREDUC_SUM)
 	   (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
   "TARGET_VECTOR"
-  "vf<reduc_op>.vs\t%0,%3,%4%p1"
+  "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
   [(set_attr "type" "vfwred<order>")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -7973,7 +7996,7 @@ (define_insn_and_split "*pred_extract_first<mode>"
 	     (parallel [(const_int 0)]))
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
   "TARGET_VECTOR"
-  "vmv.x.s\t%0,%1"
+  "%^vmv.x.s\t%0,%1"
   "known_gt (GET_MODE_BITSIZE (<VEL>mode), GET_MODE_BITSIZE (Pmode))"
   [(const_int 0)]
 {
@@ -8007,7 +8030,7 @@ (define_insn "@pred_extract_first_trunc<mode>"
 	       (parallel [(const_int 0)]))
 	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
   "TARGET_VECTOR"
-  "vmv.x.s\t%0,%1"
+  "%^vmv.x.s\t%0,%1"
   [(set_attr "type" "vimovvx")
    (set_attr "mode" "<MODE>")])
 
@@ -8036,7 +8059,7 @@ (define_insn "*pred_extract_first<mode>"
 	     (parallel [(const_int 0)]))
 	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
   "TARGET_VECTOR"
-  "vfmv.f.s\t%0,%1"
+  "%^vfmv.f.s\t%0,%1"
   [(set_attr "type" "vfmovvf")
    (set_attr "mode" "<MODE>")])
 
@@ -8056,7 +8079,7 @@ (define_insn "@pred_slide<ud><mode>"
 	   (match_operand:V_VLS 3 "register_operand"          " vr, vr, vr, vr")
 	   (match_operand 4 "pmode_reg_or_uimm5_operand"  " rK, rK, rK, rK")] VSLIDES))]
   "TARGET_VECTOR"
-  "vslide<ud>.v%o4\t%0,%3,%4%p1"
+  "%^vslide<ud>.v%o4\t%0,%3,%4%p1"
   [(set_attr "type" "vslide<ud>")
    (set_attr "mode" "<MODE>")])
 
@@ -8076,7 +8099,7 @@ (define_insn "@pred_slide<ud><mode>"
 	   (match_operand:V_VLSI_QHS 3 "register_operand"     " vr, vr, vr, vr")
 	   (match_operand:<VEL> 4 "reg_or_0_operand"      " rJ, rJ, rJ, rJ")] VSLIDES1))]
   "TARGET_VECTOR"
-  "vslide<ud>.vx\t%0,%3,%z4%p1"
+  "%^vslide<ud>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vislide<ud>")
    (set_attr "mode" "<MODE>")])
 
@@ -8117,7 +8140,7 @@ (define_insn "*pred_slide<ud><mode>"
 	   (match_operand:V_VLSI_D 3 "register_operand"       " vr, vr, vr, vr")
 	   (match_operand:<VEL> 4 "reg_or_0_operand"      " rJ, rJ, rJ, rJ")] VSLIDES1))]
   "TARGET_VECTOR"
-  "vslide<ud>.vx\t%0,%3,%z4%p1"
+  "%^vslide<ud>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vislide<ud>")
    (set_attr "mode" "<MODE>")])
 
@@ -8137,7 +8160,7 @@ (define_insn "*pred_slide<ud><mode>_extended"
 	   (sign_extend:<VEL>
 	     (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSLIDES1))]
   "TARGET_VECTOR"
-  "vslide<ud>.vx\t%0,%3,%z4%p1"
+  "%^vslide<ud>.vx\t%0,%3,%z4%p1"
   [(set_attr "type" "vislide<ud>")
    (set_attr "mode" "<MODE>")])
 
@@ -8157,7 +8180,7 @@ (define_insn "@pred_slide<ud><mode>"
 	   (match_operand:V_VLSF 3 "register_operand"     " vr, vr, vr, vr")
 	   (match_operand:<VEL> 4 "register_operand"      "  f,  f,  f,  f")] VFSLIDES1))]
   "TARGET_VECTOR"
-  "vfslide<ud>.vf\t%0,%3,%4%p1"
+  "%^vfslide<ud>.vf\t%0,%3,%4%p1"
   [(set_attr "type" "vfslide<ud>")
    (set_attr "mode" "<MODE>")])
 
@@ -8178,7 +8201,7 @@ (define_insn "@pred_gather<mode>"
 	     (match_operand:<VINDEX> 4 "register_operand" "   vr,   vr")] UNSPEC_VRGATHER)
 	  (match_operand:V_VLS 2 "vector_merge_operand"       "   vu,    0")))]
   "TARGET_VECTOR"
-  "vrgather.vv\t%0,%3,%4%p1"
+  "%^vrgather.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vgather")
    (set_attr "mode" "<MODE>")])
 
@@ -8198,7 +8221,7 @@ (define_insn "@pred_gather<mode>_scalar"
 	     (match_operand 4 "pmode_reg_or_uimm5_operand" "   rK,   rK")] UNSPEC_VRGATHER)
 	  (match_operand:V_VLS 2 "vector_merge_operand"        "   vu,    0")))]
   "TARGET_VECTOR"
-  "vrgather.v%o4\t%0,%3,%4%p1"
+  "%^vrgather.v%o4\t%0,%3,%4%p1"
   [(set_attr "type" "vgather")
    (set_attr "mode" "<MODE>")])
 
@@ -8219,7 +8242,7 @@ (define_insn "@pred_gatherei16<mode>"
 	     (match_operand:<VINDEXEI16> 4 "register_operand" "   vr,   vr")] UNSPEC_VRGATHEREI16)
 	  (match_operand:VEI16 2 "vector_merge_operand"       "   vu,    0")))]
   "TARGET_VECTOR"
-  "vrgatherei16.vv\t%0,%3,%4%p1"
+  "%^vrgatherei16.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vgather")
    (set_attr "mode" "<MODE>")])
 
@@ -8237,7 +8260,7 @@ (define_insn "@pred_compress<mode>"
 	   (match_operand:V_VLS 2 "register_operand"         "  vr,  vr")
 	   (match_operand:V_VLS 1 "vector_merge_operand"     "  vu,   0")] UNSPEC_VCOMPRESS))]
   "TARGET_VECTOR"
-  "vcompress.vm\t%0,%2,%3"
+  "%^vcompress.vm\t%0,%2,%3"
   [(set_attr "type" "vcompress")
    (set_attr "mode" "<MODE>")])
 
@@ -8288,7 +8311,7 @@ (define_insn "@pred_fault_load<mode>"
 	       (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
 	       (match_dup 2))] UNSPEC_MODIFY_VL))]
   "TARGET_VECTOR"
-  "vle<sew>ff.v\t%0,%3%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vleff.v\t%0,%3%p1" : "vle<sew>ff.v\t%0,%3%p1"; }
   [(set_attr "type" "vldff")
    (set_attr "mode" "<MODE>")])
 
@@ -8318,7 +8341,7 @@ (define_insn "@pred_unit_strided_load<mode>"
 	     (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
 	  (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
   "TARGET_VECTOR"
-  "vlseg<nf>e<sew>.v\t%0,(%z3)%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlseg<nf>e.v\t%0,(%z3)%p1" : "vlseg<nf>e<sew>.v\t%0,(%z3)%p1"; }
   [(set_attr "type" "vlsegde")
    (set_attr "mode" "<MODE>")])
 
@@ -8335,7 +8358,7 @@ (define_insn "@pred_unit_strided_store<mode>"
 	   (match_operand:VT 2 "register_operand"         "   vr")
 	   (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
   "TARGET_VECTOR"
-  "vsseg<nf>e<sew>.v\t%2,(%z1)%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsseg<nf>e.v\t%2,(%z1)%p0" : "vsseg<nf>e<sew>.v\t%2,(%z1)%p0"; }
   [(set_attr "type" "vssegte")
    (set_attr "mode" "<MODE>")])
 
@@ -8356,7 +8379,7 @@ (define_insn "@pred_strided_load<mode>"
 	     (mem:BLK (scratch))] UNSPEC_STRIDED)
 	  (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
   "TARGET_VECTOR"
-  "vlsseg<nf>e<sew>.v\t%0,(%z3),%z4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlsseg<nf>e.v\t%0,(%z3),%z4%p1" : "vlsseg<nf>e<sew>.v\t%0,(%z3),%z4%p1"; }
   [(set_attr "type" "vlsegds")
    (set_attr "mode" "<MODE>")])
 
@@ -8374,7 +8397,7 @@ (define_insn "@pred_strided_store<mode>"
 	   (match_operand:VT 3 "register_operand"         "   vr")
 	   (mem:BLK (scratch))] UNSPEC_STRIDED))]
   "TARGET_VECTOR"
-  "vssseg<nf>e<sew>.v\t%3,(%z1),%z2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vssseg<nf>e.v\t%3,(%z1),%z2%p0" : "vssseg<nf>e<sew>.v\t%3,(%z1),%z2%p0"; }
   [(set_attr "type" "vssegts")
    (set_attr "mode" "<MODE>")])
 
@@ -8405,7 +8428,7 @@ (define_insn "@pred_fault_load<mode>"
 	        [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
 	     (match_dup 2))] UNSPEC_MODIFY_VL))]
   "TARGET_VECTOR"
-  "vlseg<nf>e<sew>ff.v\t%0,(%z3)%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlseg<nf>eff.v\t%0,(%z3)%p1" : "vlseg<nf>e<sew>ff.v\t%0,(%z3)%p1"; }
   [(set_attr "type" "vlsegdff")
    (set_attr "mode" "<MODE>")])
 
@@ -8426,7 +8449,7 @@ (define_insn "@pred_indexed_<order>load<V1T:mode><RATIO64I:mode>"
 	     (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)
 	  (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO64I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO64I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V1T:MODE>")])
 
@@ -8447,7 +8470,7 @@ (define_insn "@pred_indexed_<order>load<V2T:mode><RATIO32I:mode>"
 	     (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)
 	  (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO32I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO32I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V2T:MODE>")])
 
@@ -8468,7 +8491,7 @@ (define_insn "@pred_indexed_<order>load<V4T:mode><RATIO16I:mode>"
 	     (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)
 	  (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO16I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO16I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V4T:MODE>")])
 
@@ -8489,7 +8512,7 @@ (define_insn "@pred_indexed_<order>load<V8T:mode><RATIO8I:mode>"
 	     (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)
 	  (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO8I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO8I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V8T:MODE>")])
 
@@ -8510,7 +8533,7 @@ (define_insn "@pred_indexed_<order>load<V16T:mode><RATIO4I:mode>"
 	     (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)
 	  (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO4I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO4I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V16T:MODE>")])
 
@@ -8531,7 +8554,7 @@ (define_insn "@pred_indexed_<order>load<V32T:mode><RATIO2I:mode>"
 	     (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)
 	  (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]
   "TARGET_VECTOR"
-  "vl<order>xseg<nf>ei<RATIO2I:sew>.v\t%0,(%z3),%4%p1"
+  { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO2I:sew>.v\t%0,(%z3),%4%p1"; }
   [(set_attr "type" "vlsegd<order>x")
    (set_attr "mode" "<V32T:MODE>")])
 
@@ -8548,7 +8571,7 @@ (define_insn "@pred_indexed_<order>store<V1T:mode><RATIO64I:mode>"
 	   (match_operand:RATIO64I 2 "register_operand"       "   vr")
 	   (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V1T:MODE>")])
 
@@ -8565,7 +8588,7 @@ (define_insn "@pred_indexed_<order>store<V2T:mode><RATIO32I:mode>"
 	   (match_operand:RATIO32I 2 "register_operand"       "   vr")
 	   (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V2T:MODE>")])
 
@@ -8582,7 +8605,7 @@ (define_insn "@pred_indexed_<order>store<V4T:mode><RATIO16I:mode>"
 	   (match_operand:RATIO16I 2 "register_operand"       "   vr")
 	   (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V4T:MODE>")])
 
@@ -8599,7 +8622,7 @@ (define_insn "@pred_indexed_<order>store<V8T:mode><RATIO8I:mode>"
 	   (match_operand:RATIO8I 2 "register_operand"       "   vr")
 	   (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V8T:MODE>")])
 
@@ -8616,7 +8639,7 @@ (define_insn "@pred_indexed_<order>store<V16T:mode><RATIO4I:mode>"
 	   (match_operand:RATIO4I 2 "register_operand"      "   vr")
 	   (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V16T:MODE>")])
 
@@ -8633,7 +8656,7 @@ (define_insn "@pred_indexed_<order>store<V32T:mode><RATIO2I:mode>"
 	   (match_operand:RATIO2I 2 "register_operand"      "   vr")
 	   (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]
   "TARGET_VECTOR"
-  "vs<order>xseg<nf>ei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"
+  { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"; }
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V32T:MODE>")])
 
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
-- 
2.17.1



* [PATCH v2 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1)
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
  2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
  2023-11-18  4:28 ` [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
@ 2023-11-18  4:29 ` Jun Sha (Joshua)
  2023-11-18  4:32 ` [PATCH v2 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:29 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

Because the changes to instruction generation are substantial, we can
only duplicate some typical tests from testsuite/gcc.target/riscv/rvv/base.

This patch adds tests for binary operations.
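
As an illustration, the duplicated tests follow this shape (a minimal
sketch; the intrinsics are the same ones exercised in the tests below,
while the function name `example' is purely illustrative):

	#include "riscv_th_vector.h"

	/* Load the input, add it to itself, store the result.  */
	void example (void *in, void *out)
	{
	    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);     /* th.vle.v  */
	    vint32m1_t v2 = __riscv_vadd_vv_i32m1 (v, v, 4);  /* th.vadd.vv */
	    __riscv_vse32_v_i32m1 (out, v2, 4);               /* th.vse.v  */
	}

Under -march=rv32gcxtheadvector the load, the add and the store are
expected to assemble to th.vle.v, th.vadd.vv and th.vse.v rather than
the standard RVV mnemonics, which is what the check-function-bodies
patterns in these tests verify.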

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp: New test.
---
 .../rvv/xtheadvector/binop_vv_constraint-1.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vv_constraint-3.c  | 27 +++++++
 .../rvv/xtheadvector/binop_vv_constraint-4.c  | 27 +++++++
 .../rvv/xtheadvector/binop_vv_constraint-5.c  | 29 ++++++++
 .../rvv/xtheadvector/binop_vv_constraint-6.c  | 28 +++++++
 .../rvv/xtheadvector/binop_vv_constraint-7.c  | 29 ++++++++
 .../rvv/xtheadvector/binop_vx_constraint-1.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-10.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-2.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-3.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-4.c  | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-5.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-6.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-7.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-8.c  | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-9.c  | 68 +++++++++++++++++
 .../rvv/xtheadvector/rvv-xtheadvector.exp     | 41 +++++++++++
 17 files changed, 939 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
new file mode 100644
index 00000000000..172dfb6c228
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
new file mode 100644
index 00000000000..c89635ab85b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_m (m3, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_m (m3, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
new file mode 100644
index 00000000000..3cca8a47ef1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (m3, m3, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (m3, m3, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
new file mode 100644
index 00000000000..45a679b424c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (mask, m3, v, v, 4);
+    m4 = __riscv_vmseq_vv_i32m1_b32_m (m4, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (mask, m3, v, v, 4);
+    m4 = __riscv_vmslt_vv_i32m1_b32_m (m4, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vmv} 2 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
new file mode 100644
index 00000000000..1ef85d556d9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = __riscv_vlm_v_b32 (in, 4);
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v2, 4);
+    vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (m3, mask, v, v, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = __riscv_vlm_v_b32 (in, 4);
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v2, 4);
+    vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (m3, mask, v, v, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vmv} 2 } } */
+
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
new file mode 100644
index 00000000000..bbef0d43664
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_m (m3, v2, v2, 4);
+    m4 = __riscv_vmseq_vv_i32m1_b32_m (m4, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+    vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_m (m3, v2, v2, 4);
+    m4 = __riscv_vmslt_vv_i32m1_b32_m (m4, v2, v2, 4);
+    __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
new file mode 100644
index 00000000000..ed9b0c7c01f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
new file mode 100644
index 00000000000..89616f3d3b0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
new file mode 100644
index 00000000000..e64543b1aac
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
new file mode 100644
index 00000000000..4775a4af325
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
new file mode 100644
index 00000000000..6dd00c8b3b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
new file mode 100644
index 00000000000..86606537b14
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
new file mode 100644
index 00000000000..e7bede15b86
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
new file mode 100644
index 00000000000..1cd688919f1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
new file mode 100644
index 00000000000..70f525d30ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
new file mode 100644
index 00000000000..0b248b68e0c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
new file mode 100644
index 00000000000..ffc8fee575f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
@@ -0,0 +1,41 @@
+# Copyright (C) 2017-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
+	"-I$srcdir/$subdir/../ -std=gnu99 -O2" $DEFAULT_CFLAGS
+
+# All done.
+dg-finish
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v2 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2)
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (2 preceding siblings ...)
  2023-11-18  4:29 ` [PATCH v2 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
@ 2023-11-18  4:32 ` Jun Sha (Joshua)
  2023-11-18  4:34 ` [PATCH v2 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:32 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

Since instruction generation changes significantly with XTheadVector, we
can only duplicate some typical tests from testsuite/gcc.target/riscv/rvv/base.

This patch adds tests for binary operations; the common test shape is
sketched below.
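
Each test exercises an operation through its tail-undisturbed (_tu),
masked (_m), and tail-and-mask-undisturbed (_tumu) intrinsic variants
and matches the generated assembly.  As a minimal sketch of the common
shape (mirroring the vand tests below; the operator and the scalar
operand -- register, in-range immediate, or out-of-range immediate --
vary per file):

	vint32m1_t v  = __riscv_vle32_v_i32m1 (in, 4);
	vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
	vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
	vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, x, 4);
	__riscv_vse32_v_i32m1 (out, v4, 4);

check-function-bodies then verifies that these lower to th.vle.v,
th.vand.vx and th.vse.v instead of the standard RVV mnemonics.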

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c: New test.
---
 .../rvv/xtheadvector/binop_vx_constraint-11.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-12.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-13.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-14.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-15.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-16.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-17.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-18.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-19.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-20.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-21.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-22.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-23.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-24.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-25.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-26.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-27.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-28.c | 68 +++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-29.c | 73 +++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-30.c | 68 +++++++++++++++++
 20 files changed, 1405 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
new file mode 100644
index 00000000000..f9671318a67
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
new file mode 100644
index 00000000000..3e991339a22
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
new file mode 100644
index 00000000000..068e9c32511
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
new file mode 100644
index 00000000000..26af4748453
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
new file mode 100644
index 00000000000..f19130108df
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
new file mode 100644
index 00000000000..3134d1ebe5c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
new file mode 100644
index 00000000000..82e7c668e59
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
new file mode 100644
index 00000000000..57c548b25c5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmul_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
new file mode 100644
index 00000000000..8406970e64e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
new file mode 100644
index 00000000000..6b34dfa9c79
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmax_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
new file mode 100644
index 00000000000..e73bc0f68bc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
new file mode 100644
index 00000000000..04f2d292bb4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vmin_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
new file mode 100644
index 00000000000..6ce0d028347
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
new file mode 100644
index 00000000000..0536eba14b8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
new file mode 100644
index 00000000000..291b0afdf85
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
new file mode 100644
index 00000000000..9c85da5b605
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
new file mode 100644
index 00000000000..bea468b263a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
new file mode 100644
index 00000000000..2640324cb4d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
new file mode 100644
index 00000000000..66361ad567d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
new file mode 100644
index 00000000000..901e03bc181
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
-- 
2.17.1



* [PATCH v2 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3)
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (3 preceding siblings ...)
  2023-11-18  4:32 ` [PATCH v2 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
@ 2023-11-18  4:34 ` Jun Sha (Joshua)
  2023-11-18  4:35 ` [PATCH v2 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:34 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

Since instruction generation changes substantially under XTheadVector,
we can only duplicate some typical tests from
testsuite/gcc.target/riscv/rvv/base rather than reuse the whole RVV
testsuite.

This patch adds tests for binary operations (an illustrative sketch
of the shared test style follows).
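
The following minimal sketch is not part of the patch itself; it
assumes the dg directives used throughout these tests, and the exact
generated assembly (register allocation, vsetvli placement) may
differ. A duplicated test keeps the standard RVV intrinsics and only
rewrites the expected assembly to the th.-prefixed XTheadVector
mnemonics:

/* { dg-do compile } */
/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
/* { dg-final { check-function-bodies "**" "" } } */
#include "riscv_th_vector.h"

/*
** f1:
**  ...
**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
**	ret
*/
void f1 (void *in, void *out, int32_t x)
{
  /* Standard RVV intrinsics; only the checked mnemonics change.  */
  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
  vint32m1_t v2 = __riscv_vadd_vx_i32m1 (v, x, 4);
  __riscv_vse32_v_i32m1 (out, v2, 4);
}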

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c: New test.
---
 .../rvv/xtheadvector/binop_vx_constraint-31.c |  73 +++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-32.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-33.c |  73 +++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-34.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-35.c |  73 +++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-36.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-37.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-38.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-39.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-40.c |  73 +++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-41.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-42.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-43.c |  68 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-44.c |  73 +++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-45.c | 123 ++++++++++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-46.c |  72 ++++++++++
 .../rvv/xtheadvector/binop_vx_constraint-47.c |  16 +++
 .../rvv/xtheadvector/binop_vx_constraint-48.c |  16 +++
 .../rvv/xtheadvector/binop_vx_constraint-49.c |  16 +++
 .../rvv/xtheadvector/binop_vx_constraint-50.c |  18 +++
 20 files changed, 1238 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
new file mode 100644
index 00000000000..66361ad567d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
new file mode 100644
index 00000000000..901e03bc181
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
new file mode 100644
index 00000000000..651244f7a0d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
new file mode 100644
index 00000000000..25460cd3f17
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
new file mode 100644
index 00000000000..651244f7a0d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
new file mode 100644
index 00000000000..25460cd3f17
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+    vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
new file mode 100644
index 00000000000..aca803f3930
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
new file mode 100644
index 00000000000..ce9261f67e3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, -15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, -15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, -15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
new file mode 100644
index 00000000000..3adb7ae8f79
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
new file mode 100644
index 00000000000..995b52130cb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 17, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 17, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 17, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
new file mode 100644
index 00000000000..7c4b1e78ca3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
new file mode 100644
index 00000000000..b971a9af222
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
new file mode 100644
index 00000000000..ae23fa67f02
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
new file mode 100644
index 00000000000..120230d1f2c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**  ...
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+    vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
new file mode 100644
index 00000000000..cec8a0b8012
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcxtheadvector -mabi=lp64d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, -16, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, -16, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 15, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 15, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 16, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 16, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, x, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
new file mode 100644
index 00000000000..7210890f20f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
@@ -0,0 +1,72 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, -16, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, -16, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 15, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 15, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 16, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 16, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
new file mode 100644
index 00000000000..0351e452d5f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, 0xAAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
new file mode 100644
index 00000000000..3b849e906db
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, 0xAAAAAAAAAAAAAAAA, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
new file mode 100644
index 00000000000..4a18a410252
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+  vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, x, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
new file mode 100644
index 00000000000..6713316fcab
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int32_t x, int n)
+{
+  for (int i = 0; i < n; i++) {
+    vint64m1_t v = __riscv_vle64_v_i64m1 (in + i + 1, 4);
+    vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + i + 2, 4);
+    vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+    vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, x, 4);
+    __riscv_vse64_v_i64m1 (out + i + 2, v4, 4);
+  }
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero\s+\.L[0-9]+\:\s+} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
-- 
2.17.1



* [PATCH v2 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4)
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (4 preceding siblings ...)
  2023-11-18  4:34 ` [PATCH v2 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
@ 2023-11-18  4:35 ` Jun Sha (Joshua)
  2023-11-18  4:37 ` [PATCH v2 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:35 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

Since instruction generation changes substantially under XTheadVector,
we can only duplicate some typical tests from
testsuite/gcc.target/riscv/rvv/base.

This patch adds tests for ternary and unary operations (a short
sketch of the ternary pattern follows).
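
The following minimal sketch is likewise not part of the patch; it
assumes the same dg directives as the tests below. The
th.vma[c-d][c-d] pattern accepts either th.vmacc or th.vmadd, since
the compiler may legitimately emit either form:

/* { dg-do compile } */
/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
/* { dg-final { check-function-bodies "**" "" } } */
#include "riscv_th_vector.h"

/*
** f1:
**  ...
**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
**	ret
*/
void f1 (void *in, void *in2, void *out)
{
  vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
  vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
  /* v3 = v + v2 * v2, expected as th.vmacc.vv or th.vmadd.vv.  */
  vint32m1_t v3 = __riscv_vmacc_vv_i32m1 (v, v2, v2, 4);
  __riscv_vse32_v_i32m1 (out, v3, 4);
}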

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c: New test.
---
 .../rvv/xtheadvector/ternop_vv_constraint-1.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vv_constraint-2.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vv_constraint-3.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vv_constraint-4.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vv_constraint-5.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vv_constraint-6.c |  83 +++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-1.c |  71 ++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-2.c |  38 +++++
 .../rvv/xtheadvector/ternop_vx_constraint-3.c | 125 +++++++++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-4.c | 123 +++++++++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-5.c | 123 +++++++++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-6.c | 130 ++++++++++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-7.c | 130 ++++++++++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-8.c |  71 ++++++++++
 .../rvv/xtheadvector/ternop_vx_constraint-9.c |  71 ++++++++++
 .../rvv/xtheadvector/unop_v_constraint-1.c    |  68 +++++++++
 16 files changed, 1448 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c

diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
new file mode 100644
index 00000000000..d98755e7040
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vv_i32m1 (v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vv_i32m1(v3, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vv_i32m1_tu (v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vv_i32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vv_i32m1_m (m, v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vv_i32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
new file mode 100644
index 00000000000..e9d2c7f10a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmadd_vv_i32m1 (v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmadd_vv_i32m1(v3, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmadd_vv_i32m1_tu (v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmadd_vv_i32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmadd_vv_i32m1_m (m, v, v2, v2, 4);
+    vint32m1_t v4 = __riscv_vmadd_vv_i32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
new file mode 100644
index 00000000000..2f70761558d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1 (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1(v3, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_tu (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_m (m, v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
new file mode 100644
index 00000000000..0ba9c866b32
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1 (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1(v3, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_tu (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_m (m, v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
new file mode 100644
index 00000000000..e913cfe9ef8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1 (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1(v3, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_tu (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_m (m, v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
new file mode 100644
index 00000000000..ced00a2e43e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1 (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1(v3, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_tu (v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_tu(v3, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+    vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_m (m, v, v2, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_m(m, v3, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
new file mode 100644
index 00000000000..34e6fe355a3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tumu (mask, v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
new file mode 100644
index 00000000000..290981625bf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
@@ -0,0 +1,38 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void * in2, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f2 (void * in, void * in2, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f3 (void * in, void * in2, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+    vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+    vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tumu (mask, v3, x, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vma[c-d][c-d]\.vx\s+v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+\s+} 5 } } */
+/* { dg-final { scan-assembler-times {th.vma[c-d][c-d]\.vx\s+v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,\s*v0.t} 1 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
new file mode 100644
index 00000000000..491cd2d42af
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
@@ -0,0 +1,125 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcxtheadvector -mabi=lp64d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, -16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, -16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 15, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 15, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, x, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, x, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
new file mode 100644
index 00000000000..70f249bfc8b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, -16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, -16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 15, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 15, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**  ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**  ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, x, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, x, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
new file mode 100644
index 00000000000..3de929de136
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, -16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, -16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 15, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 15, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**  ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**  ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, x, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, x, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
new file mode 100644
index 00000000000..ceef8794297
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask, v2, -16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask, v3, -16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 15, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 15, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**  ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**  ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, x, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, x, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
new file mode 100644
index 00000000000..6e524489176
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask, v2, -16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask, v3, -16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 15, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 15, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 16, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 16, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+**  ...
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**  ...
+**	ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**  ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+**  ...
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**	th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+**  ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+  vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+  vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+  vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+  vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, x, v2, 4);
+  vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, x, v3, 4);
+  __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
new file mode 100644
index 00000000000..16f03203276
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+    vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
new file mode 100644
index 00000000000..13bd7f762f2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+**	th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+**	th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+    vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+    vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+    vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+    __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
new file mode 100644
index 00000000000..95b35d3ad36
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out)
+{
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+    vint32m1_t v4 = __riscv_vneg_v_i32m1_tu (v3, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+    vint32m1_t v4 = __riscv_vneg_v_i32m1_m (mask, v3, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**  ...
+**	th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+**  ...
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+    vint32m1_t v4 = __riscv_vneg_v_i32m1_tumu (mask, v3, v2, 4);
+    __riscv_vse32_v_i32m1 (out, v4, 4);
+}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v2 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (5 preceding siblings ...)
  2023-11-18  4:35 ` [PATCH v2 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
@ 2023-11-18  4:37 ` Jun Sha (Joshua)
  2023-11-18  4:39 ` [PATCH v2 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:37 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

This patch adds support for generating the XTheadVector-specific
load/store instructions (th.vlb/th.vsb, th.vlh/th.vsh, th.vlw/th.vsw
and their unsigned variants).
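
The new builtins expose these instructions directly from C.  A
hypothetical usage sketch follows; the exact intrinsic names are
defined in thead-vector-builtins-functions.def, so the names below
are an assumption inferred from the test file names, not a
documented API:

#include "riscv_th_vector.h"

/* Assumed names: __riscv_th_vlb_v_i8m1 / __riscv_th_vsb_v_i8m1.
   th.vlb loads sign-extended byte elements; th.vsb stores byte
   elements back to memory.  */
void copy_bytes (int8_t *in, int8_t *out, size_t vl)
{
    vint8m1_t v = __riscv_th_vlb_v_i8m1 (in, vl);
    __riscv_th_vsb_v_i8m1 (out, v, vl);
}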

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc
	(class th_loadstore_width): Define new builtin bases.
	(BASE): Define new builtin bases.
	* config/riscv/riscv-vector-builtins-bases.h:
	Define new builtin class.
	* config/riscv/riscv-vector-builtins-functions.def (vlsegff):
	Include thead-vector-builtins-functions.def.
	* config/riscv/riscv-vector-builtins-shapes.cc
	(struct th_loadstore_width_def): Define new builtin shapes.
	(struct th_indexed_loadstore_width_def):
	Define new builtin shapes.
	(SHAPE): Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-shapes.h:
	Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-types.def
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	(vint8m1_t): Add datatypes for XTheadVector.
	(vint8m2_t): Likewise.
	(vint8m4_t): Likewise.
	(vint8m8_t): Likewise.
	(vint16m1_t): Likewise.
	(vint16m2_t): Likewise.
	(vint16m4_t): Likewise.
	(vint16m8_t): Likewise.
	(vint32m1_t): Likewise.
	(vint32m2_t): Likewise.
	(vint32m4_t): Likewise.
	(vint32m8_t): Likewise.
	(vint64m1_t): Likewise.
	(vint64m2_t): Likewise.
	(vint64m4_t): Likewise.
	(vint64m8_t): Likewise.
	(vuint8m1_t): Likewise.
	(vuint8m2_t): Likewise.
	(vuint8m4_t): Likewise.
	(vuint8m8_t): Likewise.
	(vuint16m1_t): Likewise.
	(vuint16m2_t): Likewise.
	(vuint16m4_t): Likewise.
	(vuint16m8_t): Likewise.
	(vuint32m1_t): Likewise.
	(vuint32m2_t): Likewise.
	(vuint32m4_t): Likewise.
	(vuint32m8_t): Likewise.
	(vuint64m1_t): Likewise.
	(vuint64m2_t): Likewise.
	(vuint64m4_t): Likewise.
	(vuint64m8_t): Likewise.
	* config/riscv/riscv-vector-builtins.cc
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test.
---
 .../riscv/riscv-vector-builtins-bases.cc      | 122 +++++++
 .../riscv/riscv-vector-builtins-bases.h       |  30 ++
 .../riscv/riscv-vector-builtins-functions.def |   2 +
 .../riscv/riscv-vector-builtins-shapes.cc     | 100 ++++++
 .../riscv/riscv-vector-builtins-shapes.h      |   2 +
 .../riscv/riscv-vector-builtins-types.def     | 120 +++++++
 gcc/config/riscv/riscv-vector-builtins.cc     | 300 +++++++++++++++++-
 .../riscv/thead-vector-builtins-functions.def |  30 ++
 gcc/config/riscv/thead-vector.md              | 235 ++++++++++++++
 gcc/config/riscv/vector.md                    |   1 +
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |  68 ++++
 16 files changed, 1349 insertions(+), 1 deletion(-)
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
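
The strided and indexed variants follow the same naming scheme.  A
hedged sketch (the th_vlsb/th_vlxb names and argument orders below are
inferred from the scalar_const_ptr_size_args and
scalar_const_ptr_index_args shapes in this patch, not copied from a
test):

    #include "riscv_vector.h"

    void g (void *in, void *out)
    {
        /* Strided: load a byte every 16 bytes, sign-extended to 32 bits.  */
        vint32m1_t s = __riscv_th_vlsb_v_i32m1 (in, 16, 4);
        /* Indexed (gather): byte loads at unsigned byte offsets.  */
        vuint32m1_t idx = __riscv_vmv_v_x_u32m1 (0, 4);
        vint32m1_t g2 = __riscv_th_vlxb_v_i32m1 (in, idx, 4);
        __riscv_th_vsb_v_i32m1 (out, __riscv_vadd_vv_i32m1 (s, g2, 4), 4);
    }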

diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..186bc4a9bf1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -268,6 +268,66 @@ public:
   }
 };
 
+/* Implements
+ * th.vl(b/h/w)[u].v/th.vs(b/h/w)[u].v/th.vls(b/h/w)[u].v/th.vss(b/h/w)[u].v/
+ * th.vlx(b/h/w)[u].v/th.vs[u]x(b/h/w).v
+ * codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    if (LST_TYPE == LST_INDEXED)
+      {
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+					       e.vector_mode ()));
+	else
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+      }
+  }
+};
+
 /* Implements
    vadd/vsub/vand/vor/vxor/vsll/vsra/vsrl/
    vmin/vmax/vminu/vmaxu/vdiv/vrem/vdivu/
@@ -2384,6 +2444,37 @@ static CONSTEXPR const seg_indexed_store<UNSPEC_UNORDERED> vsuxseg_obj;
 static CONSTEXPR const seg_indexed_store<UNSPEC_ORDERED> vsoxseg_obj;
 static CONSTEXPR const vlsegff vlsegff_obj;
 
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
 #define BASE(NAME) \
@@ -2646,4 +2737,35 @@ BASE (vsuxseg)
 BASE (vsoxseg)
 BASE (vlsegff)
 
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..a062ff6dc95 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -48,6 +48,36 @@ extern const function_base *const vsoxei8;
 extern const function_base *const vsoxei16;
 extern const function_base *const vsoxei32;
 extern const function_base *const vsoxei64;
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
 extern const function_base *const vadd;
 extern const function_base *const vsub;
 extern const function_base *const vrsub;
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 1c37fd5fffe..3e7e134a924 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -651,4 +651,6 @@ DEF_RVV_FUNCTION (vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_p
 DEF_RVV_FUNCTION (vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
 DEF_RVV_FUNCTION (vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
 
+#include "thead-vector-builtins-functions.def"
+
 #undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..e24c535e496 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -188,6 +188,104 @@ struct indexed_loadstore_def : public function_shape
   }
 };
 
+/* th_loadstore_width_def class.  */
+struct th_loadstore_width_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do nothing if the XTheadVector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if the XTheadVector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return nullptr;
+
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to the rvv-intrinsic-doc, the "_m" suffix is not added
+       for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+/* th_indexed_loadstore_width_def class.  */
+struct th_indexed_loadstore_width_def : public function_shape
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do nothing if the XTheadVector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES;
+	 ++pred_idx)
+      {
+	for (unsigned int vec_type_idx = 0;
+	     group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES;
+	     ++vec_type_idx)
+	  {
+	   tree index_type = group.ops_infos.args[1].get_tree_type (
+	      group.ops_infos.types[vec_type_idx].index);
+	   if (!index_type)
+	      continue;
+	   build_one (b, group, pred_idx, vec_type_idx);
+	  }
+      }
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to the rvv-intrinsic-doc, the "_m" suffix is not added
+       for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 /* alu_def class.  */
 struct alu_def : public build_base
 {
@@ -988,6 +1086,8 @@ SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
 SHAPE(indexed_loadstore, indexed_loadstore)
+SHAPE(th_loadstore_width, th_loadstore_width)
+SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width)
 SHAPE(alu, alu)
 SHAPE(alu_frm, alu_frm)
 SHAPE(widen_alu, widen_alu)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..1d93895b87a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -28,6 +28,8 @@ extern const function_shape *const vsetvl;
 extern const function_shape *const vsetvlmax;
 extern const function_shape *const loadstore;
 extern const function_shape *const indexed_loadstore;
+extern const function_shape *const th_loadstore_width;
+extern const function_shape *const th_indexed_loadstore_width;
 extern const function_shape *const alu;
 extern const function_shape *const alu_frm;
 extern const function_shape *const widen_alu;
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6aa45ae9a7e..74b1be6498c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_I_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_I8_OPS" macro include all signed integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I8_OPS
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I16_OPS" macro include all signed integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I16_OPS
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I32_OPS" macro include all signed integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I32_OPS
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_U_OPS
 #define DEF_RVV_U_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_U8_OPS" macro include all unsigned integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U8_OPS
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U16_OPS" macro include all unsigned integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U16_OPS
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U32_OPS" macro include all unsigned integer which will be
+   iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U32_OPS
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_F_OPS" macro include all floating-point which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_F_OPS
@@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_I8_OPS (vint8m1_t, 0)
+DEF_RVV_I8_OPS (vint8m2_t, 0)
+DEF_RVV_I8_OPS (vint8m4_t, 0)
+DEF_RVV_I8_OPS (vint8m8_t, 0)
+DEF_RVV_I8_OPS (vint16m1_t, 0)
+DEF_RVV_I8_OPS (vint16m2_t, 0)
+DEF_RVV_I8_OPS (vint16m4_t, 0)
+DEF_RVV_I8_OPS (vint16m8_t, 0)
+DEF_RVV_I8_OPS (vint32m1_t, 0)
+DEF_RVV_I8_OPS (vint32m2_t, 0)
+DEF_RVV_I8_OPS (vint32m4_t, 0)
+DEF_RVV_I8_OPS (vint32m8_t, 0)
+DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I16_OPS (vint16m1_t, 0)
+DEF_RVV_I16_OPS (vint16m2_t, 0)
+DEF_RVV_I16_OPS (vint16m4_t, 0)
+DEF_RVV_I16_OPS (vint16m8_t, 0)
+DEF_RVV_I16_OPS (vint32m1_t, 0)
+DEF_RVV_I16_OPS (vint32m2_t, 0)
+DEF_RVV_I16_OPS (vint32m4_t, 0)
+DEF_RVV_I16_OPS (vint32m8_t, 0)
+DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I32_OPS (vint32m1_t, 0)
+DEF_RVV_I32_OPS (vint32m2_t, 0)
+DEF_RVV_I32_OPS (vint32m4_t, 0)
+DEF_RVV_I32_OPS (vint32m8_t, 0)
+DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_U_OPS (vuint8mf2_t, 0)
@@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_U8_OPS (vuint8m1_t, 0)
+DEF_RVV_U8_OPS (vuint8m2_t, 0)
+DEF_RVV_U8_OPS (vuint8m4_t, 0)
+DEF_RVV_U8_OPS (vuint8m8_t, 0)
+DEF_RVV_U8_OPS (vuint16m1_t, 0)
+DEF_RVV_U8_OPS (vuint16m2_t, 0)
+DEF_RVV_U8_OPS (vuint16m4_t, 0)
+DEF_RVV_U8_OPS (vuint16m8_t, 0)
+DEF_RVV_U8_OPS (vuint32m1_t, 0)
+DEF_RVV_U8_OPS (vuint32m2_t, 0)
+DEF_RVV_U8_OPS (vuint32m4_t, 0)
+DEF_RVV_U8_OPS (vuint32m8_t, 0)
+DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U16_OPS (vuint16m1_t, 0)
+DEF_RVV_U16_OPS (vuint16m2_t, 0)
+DEF_RVV_U16_OPS (vuint16m4_t, 0)
+DEF_RVV_U16_OPS (vuint16m8_t, 0)
+DEF_RVV_U16_OPS (vuint32m1_t, 0)
+DEF_RVV_U16_OPS (vuint32m2_t, 0)
+DEF_RVV_U16_OPS (vuint32m4_t, 0)
+DEF_RVV_U16_OPS (vuint32m8_t, 0)
+DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U32_OPS (vuint32m1_t, 0)
+DEF_RVV_U32_OPS (vuint32m2_t, 0)
+DEF_RVV_U32_OPS (vuint32m4_t, 0)
+DEF_RVV_U32_OPS (vuint32m8_t, 0)
+DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
 
 #undef DEF_RVV_I_OPS
+#undef DEF_RVV_I8_OPS
+#undef DEF_RVV_I16_OPS
+#undef DEF_RVV_I32_OPS
 #undef DEF_RVV_U_OPS
+#undef DEF_RVV_U8_OPS
+#undef DEF_RVV_U16_OPS
+#undef DEF_RVV_U32_OPS
 #undef DEF_RVV_F_OPS
 #undef DEF_RVV_B_OPS
 #undef DEF_RVV_WEXTI_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6330a3a41c3..c2f1f6d1a9b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -246,6 +246,63 @@ static const rvv_type_info iu_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of all signed integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info i8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info i16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info i32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info u8_ops[] = {
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info u16_ops[] = {
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types that will be registered for
+   intrinsic functions.  */
+static const rvv_type_info u32_ops[] = {
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info iu8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info iu16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info iu32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 /* A list of all types will be registered for intrinsic functions.  */
 static const rvv_type_info all_ops[] = {
 #define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -913,7 +970,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[]
 
 /* A list of args for vector_type func (vector_type) function.  */
 static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[]
-  = {rvv_arg_type_info (RVV_BASE_vector),
+  = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, eew8_index_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, eew8_index_type, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, size_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
      rvv_arg_type_info_end};
 
 /* A list of none preds that will be registered for intrinsic functions.  */
@@ -2604,6 +2686,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
      rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
      ext_vcreate_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args  */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew8_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew16_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew32_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
 #define DEF_RVV_TYPE_INDEX(                                                    \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..2885e7a475c
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,30 @@
+DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlswu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..d1e9f305922
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,235 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW
+  UNSPEC_TH_VLWU
+
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW
+  UNSPEC_TH_VLSWU
+
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VLXWU
+
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+  UNSPEC_TH_VLB UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+  UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+  UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+  (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+  (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+  (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+  (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+  (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+  (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+  (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+  (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+  (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+  (UNSPEC_TH_VSUXB "b")
+  (UNSPEC_TH_VSUXH "h")
+  (UNSPEC_TH_VSUXW "w")
+])
+
+(define_int_attr vlmem_order_attr [
+  (UNSPEC_TH_VLXB "")
+  (UNSPEC_TH_VLXH "")
+  (UNSPEC_TH_VLXW "")
+  (UNSPEC_TH_VSUXB "u")
+  (UNSPEC_TH_VSUXH "u")
+  (UNSPEC_TH_VSUXW "u")
+])
+
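+;; The store iterators below reuse the load unspecs (UNSPEC_TH_VLB also
+;; keys th.vsb, UNSPEC_TH_VLSB keys th.vssb, and so on); only the
+;; unordered indexed stores need dedicated UNSPEC_TH_VSUX* codes.
+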
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
+;; Vector Unit-Stride Instructions
+(define_expand "@pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand")
+	 (match_operand 4 "vector_length_operand")
+	 (match_operand 5 "const_int_operand")
+	 (match_operand 6 "const_int_operand")
+	 (match_operand 7 "const_int_operand")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
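+;; The insn below also covers whole-register moves (th.vmv.v.v) and is
+;; split to a plain register-to-register set once the merge operand is
+;; unused and the AVL is VLMAX.
+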
+(define_insn_and_split "*pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"	    "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand"	   "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+	 (match_operand 4 "vector_length_operand"	      "   rK,    rK,    rK,    rK,    rK,    rK")
+	 (match_operand 5 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 6 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 7 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"	      "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"	    "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+	|| register_operand (operands[3], <MODE>mode)))"
+  "@
+   th.vl<vlmem_op_attr>.v\t%0,%3%p1
+   th.vl<vlmem_op_attr>.v\t%0,%3
+   th.vl<vlmem_op_attr>.v\t%0,%3,%1.t
+   th.vs<vlmem_op_attr>.v\t%3,%0%p1
+   th.vmv.v.v\t%0,%3
+   th.vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+	  (match_operand:VI 2 "register_operand"	 "    vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "th.vs<vlmem_op_attr>.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+;; Vector Strided Instructions
+(define_insn "@pred_strided_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	      "=vr,    vr,    vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 7 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 8 "const_int_operand"	"    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP)
+	  (unspec:VI
+	    [(match_operand:VI 3 "memory_operand"	 "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_TH_VLSMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "th.vls<vlmem_op_attr>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP)
+	  (unspec:VI
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:VI 3 "register_operand"       "   vr")] UNSPEC_TH_VSSMEM_OP)
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "th.vss<vlmem_op_attr>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+;; Vector Indexed Instructions
+(define_insn "@pred_indexed_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	     "=vd, vr,vd, vr")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+	     (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+	     (match_operand 6 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 7 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 8 "const_int_operand"	 "  i,  i, i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP)
+	  (unspec:VI
+	    [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "th.vlx<vlmem_op_attr>.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vldux")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_indexed_<vlmem_order_attr>store_width<vlmem_op_attr><mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:VI 2 "register_operand" "  vr")
+	   (match_operand:VI 3 "register_operand"  "  vr")] UNSPEC_TH_VSXMEM_OP))]
+  "TARGET_XTHEADVECTOR"
+  "th.vs<vlmem_order_attr>x<vlmem_op_attr>.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 2af237854f9..a920264f35b 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -8660,5 +8660,6 @@ (define_insn "@pred_indexed_<order>store<V32T:mode><RATIO2I:mode>"
   [(set_attr "type" "vssegt<order>x")
    (set_attr "mode" "<V32T:MODE>")])
 
+(include "thead-vector.md")
 (include "autovec.md")
 (include "autovec-opt.md")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
new file mode 100644
index 00000000000..740cbee1c95
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out)
+{
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
new file mode 100644
index 00000000000..ec34fee577f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
new file mode 100644
index 00000000000..ac242af3462
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
new file mode 100644
index 00000000000..211b120fdd5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
new file mode 100644
index 00000000000..d192a3b2eae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
new file mode 100644
index 00000000000..28ee044c1e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	...
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	...
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	...
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
-- 
2.17.1


* [PATCH v2 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (6 preceding siblings ...)
  2023-11-18  4:37 ` [PATCH v2 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
@ 2023-11-18  4:39 ` Jun Sha (Joshua)
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
  8 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-18  4:39 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, Jun Sha (Joshua)

The XTheadVector extension does not support fractional LMUL, so we
need to disable the related type intrinsics.

The types involved are as follows:
v(u)int8mf8_t,
v(u)int8mf4_t,
v(u)int8mf2_t,
v(u)int16mf4_t,
v(u)int16mf2_t,
v(u)int32mf2_t,
vfloat16mf4_t,
vfloat16mf2_t,
vfloat32mf2_t
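
As a concrete illustration of the user-visible effect (this mirrors
the new fractional-type.c test below; it is not an extra hunk of the
patch):

    #include "riscv_vector.h"

    void example (void)
    {
      vint8mf2_t bad; /* error: unknown type name 'vint8mf2_t'
                         under -march=rv32gc_zvfh_xtheadvector.  */
      vint8m1_t ok;   /* Integral LMUL types remain available.  */
    }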

Contributors:
	Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
	Jin Ma <jinma@linux.alibaba.com>
	Christoph Müllner <christoph.muellner@vrull.eu>

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (riscv_v_ext_mode_p):
	New extern.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	New function.
	(build_one): Do not generate the function if the type check fails.
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv.cc (riscv_v_ext_mode_p): Remove static
	qualifier.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/fractional-type.c: New test.
---
 gcc/config/riscv/riscv-protos.h               |   1 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  22 +++
 gcc/config/riscv/riscv-vector-switch.def      | 144 +++++++++---------
 gcc/config/riscv/riscv.cc                     |   2 +-
 .../gcc.target/riscv/rvv/fractional-type.c    |  79 ++++++++++
 5 files changed, 175 insertions(+), 73 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 8cdfadbcf10..7de4f81aa9a 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -153,6 +153,7 @@ extern poly_uint64 riscv_regmode_natural_size (machine_mode);
 extern bool riscv_v_ext_vector_mode_p (machine_mode);
 extern bool riscv_v_ext_tuple_mode_p (machine_mode);
 extern bool riscv_v_ext_vls_mode_p (machine_mode);
+extern bool riscv_v_ext_mode_p (machine_mode);
 extern int riscv_get_v_regno_alignment (machine_mode);
 extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
 extern void riscv_subword_address (rtx, rtx *, rtx *, rtx *, rtx *);
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index e24c535e496..dcdb9506ff2 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,24 @@
 
 namespace riscv_vector {
 
+/* Check whether the RET and ARGS are valid for the function.  */
+
+static bool
+check_type (tree ret, vec<tree> &args)
+{
+  tree arg;
+  unsigned i;
+
+  if (!ret || (builtin_type_p (ret) && !riscv_v_ext_mode_p (TYPE_MODE (ret))))
+    return false;
+
+  FOR_EACH_VEC_ELT (args, i, arg)
+    if (!arg || (builtin_type_p (arg) && !riscv_v_ext_mode_p (TYPE_MODE (arg))))
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +67,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f17f87f89c9 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 754107cdaac..059b82c01ef 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1263,7 +1263,7 @@ riscv_v_ext_vls_mode_p (machine_mode mode)
 
 /* Return true if it is either RVV vector mode or RVV tuple mode.  */
 
-static bool
+bool
 riscv_v_ext_mode_p (machine_mode mode)
 {
   return riscv_v_ext_vector_mode_p (mode) || riscv_v_ext_tuple_mode_p (mode)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c b/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
new file mode 100644
index 00000000000..c0e5c5ef4db
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
@@ -0,0 +1,79 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_zvfh_xtheadvector -mabi=ilp32d -O3" } */
+
+#include "riscv_vector.h"
+
+void invalid_type ()
+{
+  vint8mf8_t v1;	/* { dg-error {unknown type name 'vint8mf8_t'} } */
+  vint8mf4_t v2;	/* { dg-error {unknown type name 'vint8mf4_t'} } */
+  vint8mf2_t v3;	/* { dg-error {unknown type name 'vint8mf2_t'} } */
+  vint16mf4_t v4;	/* { dg-error {unknown type name 'vint16mf4_t'} } */
+  vint16mf2_t v5;	/* { dg-error {unknown type name 'vint16mf2_t'} } */
+  vint32mf2_t v6;	/* { dg-error {unknown type name 'vint32mf2_t'} } */
+  vuint8mf8_t v7;	/* { dg-error {unknown type name 'vuint8mf8_t'} } */
+  vuint8mf4_t v8;	/* { dg-error {unknown type name 'vuint8mf4_t'} } */
+  vuint8mf2_t v9;	/* { dg-error {unknown type name 'vuint8mf2_t'} } */
+  vuint16mf4_t v10;	/* { dg-error {unknown type name 'vuint16mf4_t'} } */
+  vuint16mf2_t v11;	/* { dg-error {unknown type name 'vuint16mf2_t'} } */
+  vuint32mf2_t v12;	/* { dg-error {unknown type name 'vuint32mf2_t'} } */
+  vfloat16mf4_t v13;	/* { dg-error {unknown type name 'vfloat16mf4_t'} } */
+  vfloat16mf2_t v14;	/* { dg-error {unknown type name 'vfloat16mf2_t'} } */
+  vfloat32mf2_t v15;	/* { dg-error {unknown type name 'vfloat32mf2_t'} } */
+}
+
+void valid_type ()
+{
+  vint8m1_t v1;
+  vint8m2_t v2;
+  vint8m4_t v3;
+  vint8m8_t v4;
+  vint16m1_t v5;
+  vint16m2_t v6;
+  vint16m4_t v7;
+  vint16m8_t v8;
+  vint32m1_t v9;
+  vint32m2_t v10;
+  vint32m4_t v11;
+  vint32m8_t v12;
+  vint64m1_t v13;
+  vint64m2_t v14;
+  vint64m4_t v15;
+  vint64m8_t v16;
+  vuint8m1_t v17;
+  vuint8m2_t v18;
+  vuint8m4_t v19;
+  vuint8m8_t v20;
+  vuint16m1_t v21;
+  vuint16m2_t v22;
+  vuint16m4_t v23;
+  vuint16m8_t v24;
+  vuint32m1_t v25;
+  vuint32m2_t v26;
+  vuint32m4_t v27;
+  vuint32m8_t v28;
+  vuint64m1_t v29;
+  vuint64m2_t v30;
+  vuint64m4_t v31;
+  vuint64m8_t v32;
+  vfloat16m1_t v33;
+  vfloat16m2_t v34;
+  vfloat16m4_t v35;
+  vfloat16m8_t v36;
+  vfloat32m1_t v37;
+  vfloat32m2_t v38;
+  vfloat32m4_t v39;
+  vfloat32m8_t v40;
+  vfloat64m1_t v41;
+  vfloat64m2_t v42;
+  vfloat64m4_t v43;
+  vfloat64m8_t v44;
+
+  vbool1_t v45;
+  vbool2_t v46;
+  vbool4_t v47;
+  vbool8_t v48;
+  vbool16_t v49;
+  vbool32_t v50;
+  vbool64_t v51;
+}
\ No newline at end of file
-- 
2.17.1


* Re: [PATCH v2 1/9] RISC-V: minimal support for xtheadvector
  2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
@ 2023-11-18 10:06   ` Kito Cheng
  0 siblings, 0 replies; 69+ messages in thread
From: Kito Cheng @ 2023-11-18 10:06 UTC (permalink / raw)
  To: Jun Sha (Joshua)
  Cc: gcc-patches, jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	jeffreyalaw, christoph.muellner

On Sat, Nov 18, 2023 at 12:27 PM Jun Sha (Joshua)
<cooper.joshua@linux.alibaba.com> wrote:
>
> This patch is to introduce basic XTheadVector support
> (march string parsing and a test for __riscv_xtheadvector)
> according to https://github.com/T-head-Semi/thead-extension-spec/
>
> Contributors:
>         Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
>         Jin Ma <jinma@linux.alibaba.com>
>         Christoph Müllner <christoph.muellner@vrull.eu>
>
> gcc/ChangeLog:
>
>         * common/config/riscv/riscv-common.cc
>         (riscv_subset_list::parse): Add new vendor extension.
>         * config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):
>         Add test macro.
>         * config/riscv/riscv.opt: Add new mask.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/riscv/predef-__riscv_th_v_intrinsic.c: New test.
>         * gcc.target/riscv/rvv/xtheadvector.c: New test.
> ---
>  gcc/common/config/riscv/riscv-common.cc             | 10 ++++++++++
>  gcc/config/riscv/riscv-c.cc                         |  4 ++++
>  gcc/config/riscv/riscv.opt                          |  2 ++
>  .../riscv/predef-__riscv_th_v_intrinsic.c           | 11 +++++++++++
>  gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c   | 13 +++++++++++++
>  5 files changed, 40 insertions(+)
>  create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
>
> diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
> index 526dbb7603b..914924171fd 100644
> --- a/gcc/common/config/riscv/riscv-common.cc
> +++ b/gcc/common/config/riscv/riscv-common.cc
> @@ -75,6 +75,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
>
>    {"v", "zvl128b"},
>    {"v", "zve64d"},
> +  {"xtheadvector", "zvl128b"},
> +  {"xtheadvector", "zve64d"},

^^^ Don't imply zve64d; it will mix V 1.0 in. I know why you
want to do that, so I have given some suggestions below.

>
>    {"zve32f", "f"},
>    {"zve64f", "f"},
> @@ -325,6 +327,7 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
>    {"xtheadmemidx", ISA_SPEC_CLASS_NONE, 1, 0},
>    {"xtheadmempair", ISA_SPEC_CLASS_NONE, 1, 0},
>    {"xtheadsync", ISA_SPEC_CLASS_NONE, 1, 0},
> +  {"xtheadvector", ISA_SPEC_CLASS_NONE, 1, 0},
>
>    {"xventanacondops", ISA_SPEC_CLASS_NONE, 1, 0},
>
> @@ -1495,6 +1498,10 @@ riscv_subset_list::parse (const char *arch, location_t loc)
>      error_at (loc, "%<-march=%s%>: z*inx conflicts with floating-point "
>                    "extensions", arch);
>
> +  if (subset_list->lookup ("v") && subset_list->lookup ("xtheadvector"))
> +    error_at (loc, "%<-march=%s%>: xtheadvector conflicts with vector "
> +                  "extensions", arch);
> +
>    /* 'H' hypervisor extension requires base ISA with 32 registers.  */
>    if (subset_list->lookup ("e") && subset_list->lookup ("h"))
>      error_at (loc, "%<-march=%s%>: h extension requires i extension", arch);
> @@ -1680,6 +1687,9 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
>    {"xtheadmemidx",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMIDX},
>    {"xtheadmempair", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMPAIR},
>    {"xtheadsync",    &gcc_options::x_riscv_xthead_subext, MASK_XTHEADSYNC},
> +  {"xtheadvector",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADVECTOR},
> +  {"xtheadvector",  &gcc_options::x_target_flags, MASK_FULL_V},
> +  {"xtheadvector",  &gcc_options::x_target_flags, MASK_VECTOR},

Add the following two lines and then you don't need zve64d:
 {"xtheadvector", &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_64},
 {"xtheadvector", &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_FP_64},
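
(In effect, these map xtheadvector straight onto the ELEN=64 and FP64
capability bits that zve64d would have implied, without pulling a
V-1.0 sub-extension into the arch string.)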

>
>    {"xventanacondops", &gcc_options::x_riscv_xventana_subext, MASK_XVENTANACONDOPS},
>

* Re: [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector
  2023-11-18  4:28 ` [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
@ 2023-11-18 10:13   ` Kito Cheng
  0 siblings, 0 replies; 69+ messages in thread
From: Kito Cheng @ 2023-11-18 10:13 UTC (permalink / raw)
  To: Jun Sha (Joshua)
  Cc: gcc-patches, jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	jeffreyalaw, christoph.muellner

> diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
> new file mode 100644
> index 00000000000..194652032bc
> --- /dev/null
> +++ b/gcc/config/riscv/riscv_th_vector.h
...
> +/* NOTE: This implementation of riscv_vector.h is intentionally short.  It does
> +   not define the RVV types and intrinsic functions directly in C and C++
> +   code, but instead uses the following pragma to tell GCC to insert the
> +   necessary type and function definitions itself.  The net effect is the
> +   same, and the file is a complete implementation of riscv_vector.h.  */
> +#pragma riscv intrinsic "vector"

Plz use #pragma riscv intrinsic "thead_vector"
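
Presumably riscv_pragma_intrinsic () in riscv-c.cc then needs a
matching arm; a rough sketch, where the "thead_vector" handling is
hypothetical:

      if (strcmp (name, "vector") == 0)
	riscv_vector::handle_pragma_vector ();
      else if (strcmp (name, "thead_vector") == 0)
	/* Hypothetical arm: register the XTheadVector variant.  */
	riscv_vector::handle_pragma_vector ();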

> @@ -1135,7 +1135,7 @@ (define_expand "@mov<V_FRACT:mode><P:mode>_lra"
>      [(set (match_operand:V_FRACT 0 "reg_or_mem_operand")
>           (match_operand:V_FRACT 1 "reg_or_mem_operand"))
>     (clobber (match_scratch:P 2))])]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"

It's an accident, right?

>  {})
>
>  (define_expand "@mov<VB:mode><P:mode>_lra"
> @@ -1143,14 +1143,14 @@ (define_expand "@mov<VB:mode><P:mode>_lra"
>      [(set (match_operand:VB 0 "reg_or_mem_operand")
>           (match_operand:VB 1 "reg_or_mem_operand"))
>     (clobber (match_scratch:P 2))])]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"

Ditto.

>  {})
>
>  (define_insn_and_split "*mov<V_FRACT:mode><P:mode>_lra"
>    [(set (match_operand:V_FRACT 0 "reg_or_mem_operand" "=vr, m,vr")
>         (match_operand:V_FRACT 1 "reg_or_mem_operand" "  m,vr,vr"))
>     (clobber (match_scratch:P 2 "=&r,&r,X"))]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"

Ditto.

>    "#"
>    "&& reload_completed"
>    [(const_int 0)]
> @@ -1172,7 +1172,7 @@ (define_insn_and_split "*mov<VB:mode><P:mode>_lra"
>    [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr, m,vr")
>         (match_operand:VB 1 "reg_or_mem_operand" "  m,vr,vr"))
>     (clobber (match_scratch:P 2 "=&r,&r,X"))]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"

Ditto.

>    "#"
>    "&& reload_completed"
>    [(const_int 0)]
> @@ -1286,14 +1286,14 @@ (define_expand "@mov<VLS_AVL_REG:mode><P:mode>_lra"
>      [(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand")
>           (match_operand:VLS_AVL_REG 1 "reg_or_mem_operand"))
>     (clobber (match_scratch:P 2))])]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)"
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)"

Ditto.

>  {})
>
>  (define_insn_and_split "*mov<VLS_AVL_REG:mode><P:mode>_lra"
>    [(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand" "=vr, m,vr")
>         (match_operand:VLS_AVL_REG 1 "reg_or_mem_operand" "  m,vr,vr"))
>     (clobber (match_scratch:P 2 "=&r,&r,X"))]
> -  "TARGET_VECTOR && (lra_in_progress || reload_completed)
> +  "TARGET_VECTOR &&  (lra_in_progress || reload_completed)

Ditto.

>     && (register_operand (operands[0], <VLS_AVL_REG:MODE>mode)
>         || register_operand (operands[1], <VLS_AVL_REG:MODE>mode))"
>    "#"
> @@ -1359,7 +1359,7 @@ (define_expand "movmisalign<mode>"
>  (define_expand "movmisalign<mode>"
>    [(set (match_operand:V 0 "nonimmediate_operand")
>         (match_operand:V 1 "general_operand"))]
> -  "TARGET_VECTOR && TARGET_VECTOR_MISALIGN_SUPPORTED"
> +  "TARGET_VECTOR &&  TARGET_VECTOR_MISALIGN_SUPPORTED"

Ditto.

>    {
>      emit_move_insn (operands[0], operands[1]);
>      DONE;
> @@ -1396,7 +1396,7 @@ (define_insn_and_split "*vec_duplicate<mode>"
>    [(set (match_operand:V_VLS 0 "register_operand")
>          (vec_duplicate:V_VLS
>            (match_operand:<VEL> 1 "direct_broadcast_operand")))]
> -  "TARGET_VECTOR && can_create_pseudo_p ()"
> +  "TARGET_VECTOR &&  can_create_pseudo_p ()"

Ditto.

>    "#"
>    "&& 1"
>    [(const_int 0)]

* [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
                   ` (7 preceding siblings ...)
  2023-11-18  4:39 ` [PATCH v2 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
@ 2023-12-20 12:20 ` Jun Sha (Joshua)
  2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
                     ` (7 more replies)
  8 siblings, 8 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:20 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch series presents the GCC implementation of the XTheadVector
extension [1].

[1] https://github.com/T-head-Semi/thead-extension-spec/

Some vector patterns cannot be shared with XTheadVector, so we
disable them with "!TARGET_XTHEADVECTOR" to avoid generating
instructions that xtheadvector does not support; this accounts
for the 36 changes in vector.md.
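
The guard itself is mechanical; an illustrative sketch only, not a
verbatim hunk from the series:

    (define_expand "<v1_only_pattern>"
      [...]
      "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
      {...})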

To emit the "th." instruction prefix, we use current_output_insn
and the ASM_OUTPUT_OPCODE hook instead of directly modifying the
patterns in vector.md.
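
A simplified sketch of that approach (the real hook in this series
keys off current_output_insn rather than the first letter of the
mnemonic, so treat this as an outline only):

    /* riscv.h: let the backend rewrite each opcode as it is printed.  */
    #define ASM_OUTPUT_OPCODE(STREAM, PTR) \
      (PTR) = riscv_asm_output_opcode (STREAM, PTR)

    /* riscv.cc: prepend "th." to vector mnemonics under XTheadVector.  */
    const char *
    riscv_asm_output_opcode (FILE *stream, const char *p)
    {
      if (TARGET_XTHEADVECTOR && p[0] == 'v')
        fputs ("th.", stream);
      return p;
    }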

We have run the GCC test suite and can confirm that there
are no regressions.

All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html

Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html

Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>

RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics

---
 gcc/common/config/riscv/riscv-common.cc       |   23 +
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-c.cc                   |    8 +-
 gcc/config/riscv/riscv-protos.h               |    1 +
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   18 +-
 .../riscv/riscv-vector-builtins-bases.h       |   19 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  149 +
 .../riscv/riscv-vector-builtins-shapes.h      |    3 +
 .../riscv/riscv-vector-builtins-types.def     |  120 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   46 +-
 gcc/config/riscv/riscv.h                      |    4 +
 gcc/config/riscv/riscv.opt                    |    2 +
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  659 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
 gcc/config/riscv/thead-vector-builtins.h      |  123 +
 gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   44 +-
 .../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 .../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
 .../riscv/rvv/xtheadvector/prefix.c           |   12 +
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
 gcc/testsuite/lib/target-supports.exp         |   12 +
 39 files changed, 5931 insertions(+), 213 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c

* [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
@ 2023-12-20 12:25   ` Jun Sha (Joshua)
  2023-12-20 18:14     ` Jeff Law
  2023-12-20 12:27   ` [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns Jun Sha (Joshua)
                     ` (6 subsequent siblings)
  7 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:25 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch moves the definition of the enums lst_type and
frm_op_type into riscv-vector-builtins-bases.h and removes
the static visibility of fold_fault_load(), so these
can be used in other compilation units.

gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc (enum lst_type)
	(enum frm_op_type): Move to riscv-vector-builtins-bases.h.
	(fold_fault_load): Remove static qualifier.
	* config/riscv/riscv-vector-builtins-bases.h
	(GCC_RISCV_VECTOR_BUILTINS_BASES_H): Add header files.
	(enum lst_type, enum frm_op_type): Move from
	riscv-vector-builtins-bases.cc.
	(fold_fault_load): Declare.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 .../riscv/riscv-vector-builtins-bases.cc      | 18 +-----------------
 .../riscv/riscv-vector-builtins-bases.h       | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..c51affde353 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -48,24 +48,8 @@ using namespace riscv_vector;
 
 namespace riscv_vector {
 
-/* Enumerates types of loads/stores operations.
-   It's only used in here so we don't define it
-   in riscv-vector-builtins-bases.h.  */
-enum lst_type
-{
-  LST_UNIT_STRIDE,
-  LST_STRIDED,
-  LST_INDEXED,
-};
-
-enum frm_op_type
-{
-  NO_FRM,
-  HAS_FRM,
-};
-
 /* Helper function to fold vleff and vlsegff.  */
-static gimple *
+gimple *
 fold_fault_load (gimple_folder &f)
 {
   /* fold fault_load (const *base, size_t *new_vl, size_t vl)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..42d0cd17dc1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -21,8 +21,27 @@
 #ifndef GCC_RISCV_VECTOR_BUILTINS_BASES_H
 #define GCC_RISCV_VECTOR_BUILTINS_BASES_H
 
+#include "gimple.h"
+#include "riscv-vector-builtins.h"
+
 namespace riscv_vector {
 
+/* Enumerates types of loads/stores operations.  */
+enum lst_type
+{
+  LST_UNIT_STRIDE,
+  LST_STRIDED,
+  LST_INDEXED,
+};
+
+enum frm_op_type
+{
+  NO_FRM,
+  HAS_FRM,
+};
+
+extern gimple *fold_fault_load (gimple_folder &f);
+
 namespace bases {
 extern const function_base *const vsetvl;
 extern const function_base *const vsetvlmax;
-- 
2.17.1


* [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns.
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
  2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
@ 2023-12-20 12:27   ` Jun Sha (Joshua)
  2023-12-20 18:16     ` Jeff Law
  2023-12-20 12:30   ` [PATCH v3 3/6] RISC-V: Introduce XTheadVector as a subset of V1.0.0 Jun Sha (Joshua)
                     ` (5 subsequent siblings)
  7 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:27 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch splits the definition of csr_operand in predicates.md.
The newly defined vector_csr_operand has the same functionality
as csr_operand but can only be used in vector patterns, so that
changes for vector will not affect scalar patterns in files
like riscv.md.
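
For illustration, the kind of divergence this split enables in a
later revision (a sketch, assuming XTheadVector has to reject the
immediate AVL form that plain V accepts):

    (define_predicate "vector_csr_operand"
      (ior (and (match_test "!TARGET_XTHEADVECTOR")
                (match_operand 0 "const_csr_operand"))
           (match_operand 0 "register_operand")))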

gcc/ChangeLog:

	* config/riscv/predicates.md (vector_csr_operand):
	New predicate for vector patterns.
	* config/riscv/vector.md:
	Use the new vector_csr_operand instead of csr_operand.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/predicates.md | 4 ++++
 gcc/config/riscv/vector.md     | 8 ++++----
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 6bf6e186641..1a3a4f1ecbb 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -63,6 +63,10 @@ (define_predicate "csr_operand"
   (ior (match_operand 0 "const_csr_operand")
        (match_operand 0 "register_operand")))
 
+(define_predicate "vector_csr_operand"
+  (ior (match_operand 0 "const_csr_operand")
+       (match_operand 0 "register_operand")))
+
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
 (define_predicate "vector_scalar_shift_operand"
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index f607d768b26..036b2425f32 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -1496,7 +1496,7 @@ (define_insn_and_split "*vec_duplicate<mode>"
 
 (define_insn "@vsetvl<mode>"
   [(set (match_operand:P 0 "register_operand" "=r")
-	(unspec:P [(match_operand:P 1 "csr_operand" "rK")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
 		   (match_operand 2 "const_int_operand" "i")
 		   (match_operand 3 "const_int_operand" "i")
 		   (match_operand 4 "const_int_operand" "i")
@@ -1542,7 +1542,7 @@ (define_insn "vsetvl_vtype_change_only"
 ;; in vsetvl instruction pattern.
 (define_insn "@vsetvl_discard_result<mode>"
   [(set (reg:SI VL_REGNUM)
-	(unspec:SI [(match_operand:P 0 "csr_operand" "rK")
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
 		    (match_operand 1 "const_int_operand" "i")
 		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
    (set (reg:SI VTYPE_REGNUM)
@@ -1564,7 +1564,7 @@ (define_insn "@vsetvl_discard_result<mode>"
 ;; such pattern can allow us gain benefits of these optimizations.
 (define_insn_and_split "@vsetvl<mode>_no_side_effects"
   [(set (match_operand:P 0 "register_operand" "=r")
-	(unspec:P [(match_operand:P 1 "csr_operand" "rK")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
 		   (match_operand 2 "const_int_operand" "i")
 		   (match_operand 3 "const_int_operand" "i")
 		   (match_operand 4 "const_int_operand" "i")
@@ -1608,7 +1608,7 @@ (define_insn_and_split "*vsetvldi_no_side_effects_si_extend"
   [(set (match_operand:DI 0 "register_operand")
         (sign_extend:DI
           (subreg:SI
-	    (unspec:DI [(match_operand:P 1 "csr_operand")
+	    (unspec:DI [(match_operand:P 1 "vector_csr_operand")
 		        (match_operand 2 "const_int_operand")
 		        (match_operand 3 "const_int_operand")
 		        (match_operand 4 "const_int_operand")
-- 
2.17.1


* [PATCH v3 3/6] RISC-V: Introduce XTheadVector as a subset of V1.0.0
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
  2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
  2023-12-20 12:27   ` [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns Jun Sha (Joshua)
@ 2023-12-20 12:30   ` Jun Sha (Joshua)
  2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:30 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch is to introduce basic XTheadVector support
(march string parsing and a test for __riscv_xtheadvector)
according to https://github.com/T-head-Semi/thead-extension-spec/
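
For reference, riscv_ext_version_value () encodes a version as
major * 1000000 + minor * 1000, so the (0, 11) used here makes
__riscv_th_v_intrinsic expand to 11000, the value the new predef
test checks:

    /* riscv_ext_version_value (0, 11)
         = 0 * 1000000 + 11 * 1000
         = 11000   (version 0.11).  */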

gcc/ChangeLog:

	* common/config/riscv/riscv-common.cc
	(riscv_ext_version_table): Add xtheadvector.
	(riscv_subset_list::check_conflict_ext): Reject combining
	xtheadvector with the vector extension or its sub-extensions.
	(riscv_ext_flag_table): Set the vector-related flags implied
	by xtheadvector.
	* config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):
	Add test macro.
	* config/riscv/riscv.opt: Add new mask.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/predef-__riscv_th_v_intrinsic.c: New test.
	* gcc.target/riscv/rvv/xtheadvector.c: New test.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/common/config/riscv/riscv-common.cc       | 23 +++++++++++++++++++
 gcc/config/riscv/riscv-c.cc                   |  8 +++++--
 gcc/config/riscv/riscv.opt                    |  2 ++
 .../riscv/predef-__riscv_th_v_intrinsic.c     | 11 +++++++++
 .../gcc.target/riscv/rvv/xtheadvector.c       | 13 +++++++++++
 5 files changed, 55 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c

diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index f20d179568d..66b20c154a9 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -368,6 +368,7 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
   {"xtheadmemidx", ISA_SPEC_CLASS_NONE, 1, 0},
   {"xtheadmempair", ISA_SPEC_CLASS_NONE, 1, 0},
   {"xtheadsync", ISA_SPEC_CLASS_NONE, 1, 0},
+  {"xtheadvector", ISA_SPEC_CLASS_NONE, 1, 0},
 
   {"xventanacondops", ISA_SPEC_CLASS_NONE, 1, 0},
 
@@ -1251,6 +1252,15 @@ riscv_subset_list::check_conflict_ext ()
       if (lookup ("zcmp"))
 	error_at (m_loc, "%<-march=%s%>: zcd conflicts with zcmp", m_arch);
     }
+
+  if ((lookup ("v") || lookup ("zve32x")
+	 || lookup ("zve64x") || lookup ("zve32f")
+	 || lookup ("zve64f") || lookup ("zve64d")
+	 || lookup ("zvl32b") || lookup ("zvl64b")
+	 || lookup ("zvl128b") || lookup ("zvfh"))
+	 && lookup ("xtheadvector"))
+    error_at (m_loc, "%<-march=%s%>: xtheadvector conflicts with vector "
+		   "extension or its sub-extensions", m_arch);
 }
 
 /* Parsing function for multi-letter extensions.
@@ -1743,6 +1753,19 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
   {"xtheadmemidx",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMIDX},
   {"xtheadmempair", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMPAIR},
   {"xtheadsync",    &gcc_options::x_riscv_xthead_subext, MASK_XTHEADSYNC},
+  {"xtheadvector",  &gcc_options::x_riscv_xthead_subext, MASK_XTHEADVECTOR},
+  {"xtheadvector",  &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_32},
+  {"xtheadvector",  &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_64},
+  {"xtheadvector",  &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_FP_32},
+  {"xtheadvector",  &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_FP_64},
+  {"xtheadvector",  &gcc_options::x_riscv_vector_elen_flags, MASK_VECTOR_ELEN_FP_16},
+  {"xtheadvector",  &gcc_options::x_riscv_zvl_flags, MASK_ZVL32B},
+  {"xtheadvector",  &gcc_options::x_riscv_zvl_flags, MASK_ZVL64B},
+  {"xtheadvector",  &gcc_options::x_riscv_zvl_flags, MASK_ZVL128B},
+  {"xtheadvector",  &gcc_options::x_riscv_zf_subext, MASK_ZVFHMIN},
+  {"xtheadvector",  &gcc_options::x_riscv_zf_subext, MASK_ZVFH},
+  {"xtheadvector",  &gcc_options::x_target_flags, MASK_FULL_V},
+  {"xtheadvector",  &gcc_options::x_target_flags, MASK_VECTOR},
 
   {"xventanacondops", &gcc_options::x_riscv_xventana_subext, MASK_XVENTANACONDOPS},
 
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index d70eb8ed361..d7c63ead147 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -138,6 +138,10 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
 				     riscv_ext_version_value (0, 11));
     }
 
+   if (TARGET_XTHEADVECTOR)
+     builtin_define_with_int_value ("__riscv_th_v_intrinsic",
+				     riscv_ext_version_value (0, 11));
+
   /* Define architecture extension test macros.  */
   builtin_define_with_int_value ("__riscv_arch_test", 1);
 
@@ -191,8 +195,8 @@ riscv_pragma_intrinsic (cpp_reader *)
     {
       if (!TARGET_VECTOR)
 	{
-	  error ("%<#pragma riscv intrinsic%> option %qs needs 'V' extension "
-		 "enabled",
+	  error ("%<#pragma riscv intrinsic%> option %qs needs 'V' or "
+		 "'XTHEADVECTOR' extension enabled",
 		 name);
 	  return;
 	}
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index ede2d655e73..7de5f18e11b 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -449,6 +449,8 @@ Mask(XTHEADMEMPAIR) Var(riscv_xthead_subext)
 
 Mask(XTHEADSYNC)    Var(riscv_xthead_subext)
 
+Mask(XTHEADVECTOR)  Var(riscv_xthead_subext)
+
 TargetVariable
 int riscv_xventana_subext
 
diff --git a/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
new file mode 100644
index 00000000000..1c764241db6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64imafdcxtheadvector -mabi=lp64d" } */
+
+int main () {
+
+#if __riscv_th_v_intrinsic != 11000
+#error "__riscv_th_v_intrinsic"
+#endif
+
+  return 0;
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
new file mode 100644
index 00000000000..d52921e1314
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector" { target { rv32 } } } */
+/* { dg-options "-march=rv64gc_xtheadvector" { target { rv64 } } } */
+
+#ifndef __riscv_xtheadvector
+#error "Feature macro not defined"
+#endif
+
+int
+foo (int a)
+{
+  return a;
+}
\ No newline at end of file
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
                     ` (2 preceding siblings ...)
  2023-12-20 12:30   ` [PATCH v3 3/6] RISC-V: Introduce XTheadVector as a subset of V1.0.0 Jun Sha (Joshua)
@ 2023-12-20 12:32   ` Jun Sha (Joshua)
  2023-12-20 18:22     ` Jeff Law
  2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
  2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
                     ` (3 subsequent siblings)
  7 siblings, 2 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:32 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch adds the th. prefix to all XTheadVector instructions
by implementing a new assembly output function.
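
As an illustration (a hedged sketch, not taken from the patch), the
intended effect on intrinsic code is that every vector mnemonic gains
the vendor prefix:

	#include "riscv_vector.h"

	vint32m1_t
	add (vint32m1_t vx, vint32m1_t vy, size_t vl)
	{
	  /* With -march=rv32gc_xtheadvector the expected output is a
	     th.-prefixed sequence, roughly:
		 th.vsetvli  zero,a2,e32,m1
		 th.vadd.vv  v8,v8,v9
	     (register allocation here is illustrative).  */
	  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
	}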

gcc/ChangeLog:

	* config/riscv/riscv-protos.h
	(riscv_asm_output_opcode): New function.
	* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
	* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/riscv-protos.h               |  1 +
 gcc/config/riscv/riscv.cc                     | 26 +++++++++++++++++++
 gcc/config/riscv/riscv.h                      |  4 +++
 .../riscv/rvv/xtheadvector/prefix.c           | 12 +++++++++
 4 files changed, 43 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index eaee53ce94e..f0eee71a18a 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -101,6 +101,7 @@ struct riscv_address_info {
 };
 
 /* Routines implemented in riscv.cc.  */
+extern void riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
 extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
 extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
 extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 8ae65760b6e..d3010bed8d8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5595,6 +5595,32 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
 }
 
+void
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return;
+
+  if (current_output_insn == NULL_RTX)
+    return;
+
+  /* We need to handle the 'vset' special case here since it cannot
+     be controlled by vector mode. */
+  if (!strncmp (p, "vset", 4))
+    {
+      fputs ("th.", asm_out_file);
+      return;
+    }
+
+  subrtx_iterator::array_type array;
+  FOR_EACH_SUBRTX (iter, array, PATTERN (current_output_insn), ALL)
+    if (*iter && riscv_v_ext_mode_p (GET_MODE (*iter)) && p[0] == 'v')
+      {
+	fputs ("th.", asm_out_file);
+	return;
+      }
+}
+
 /* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
 
    'h'	Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..7bb9c9ee408 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME));				\
   } while (0)
 
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR)	\
+  riscv_asm_output_opcode (STREAM, PTR)
+
 #define JUMP_TABLES_IN_TEXT_SECTION 0
 #define CASE_VECTOR_MODE SImode
 #define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
                     ` (3 preceding siblings ...)
  2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
@ 2023-12-20 12:34   ` Jun Sha (Joshua)
  2023-12-20 14:00     ` 钟居哲
  2023-12-25  6:29     ` [PATCH v4 " Jun Sha (Joshua)
  2023-12-20 12:36   ` [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics Jun Sha (Joshua)
                     ` (2 subsequent siblings)
  7 siblings, 2 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:34 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix to all
XTheadVector instructions is not included here (that is done in
the previous patch).

For some vector patterns that cannot be handled otherwise, we use
!TARGET_XTHEADVECTOR to disable them in vector.md so that we do
not generate instructions that XTheadVector does not support,
such as vmv1r and vsext.vf2.
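
As a concrete sketch of the user-visible effect (my reading of the
riscv-vector-switch.def changes below, not text from the patch):
fractional LMUL types such as vint32mf2_t become unavailable when
compiling for XTheadVector, while whole-register LMUL types keep
working:

	#include "riscv_vector.h"

	/* Fine with -march=rv32gc_xtheadvector: LMUL = 1.  */
	vint32m1_t
	add_m1 (vint32m1_t a, vint32m1_t b, size_t vl)
	{
	  return __riscv_vadd_vv_i32m1 (a, b, vl);
	}

	/* Rejected for XTheadVector: vint32mf2_t needs fractional LMUL,
	   which this series disables via !TARGET_XTHEADVECTOR.  */
	vint32mf2_t
	add_mf2 (vint32mf2_t a, vint32mf2_t b, size_t vl)
	{
	  return __riscv_vadd_vv_i32mf2 (a, b, vl);
	}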

gcc/ChangeLog:

	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	Emit pred_th_whole_mov for XTheadVector.
	(get_prefer_tail_policy): Return TAIL_AGNOSTIC for XTheadVector.
	(get_prefer_mask_policy): Return MASK_UNDISTURBED for XTheadVector.
	(vls_mode_valid_p): Return false for XTheadVector.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	New function.
	(build_one): Call check_type for XTheadVector.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
 	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
 	extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
-	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+	extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
 
 using namespace riscv_vector;
 
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
 #include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \
+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
 };
 
 /* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */
   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */
   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */
+  XTHEADVECTOR_EXT,   /* XTheadVector extension */
 };
 
 /* Enumerates the RVV operand types.  */
@@ -233,7 +234,7 @@ struct function_group_info
     switch (ext_value)
     {
       case VECTOR_EXT:
-        return TARGET_VECTOR;
+	return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);
       case ZVBB_EXT:
         return TARGET_ZVBB;
       case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
         return TARGET_ZVKSED;
       case ZVKSH_EXT:
         return TARGET_ZVKSH;
+      case XTHEADVECTOR_EXT:
+	return TARGET_XTHEADVECTOR;
       default:
         gcc_unreachable ();
     }
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
 
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use.  */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions.  */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores. */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions.  */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16 Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions. */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions.  */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> and vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+			 gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+			 gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+			 gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+			 gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
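+
+/* A sketch of the expansion above, assuming e32/m2: a non-VLMAX call
+   passes its AVL argument through, while the VLMAX variant passes the
+   x0 hard register instead; the expander then appends SEW (32),
+   LMUL (m2) and the preferred tail/mask policies as immediate
+   operands of the th_vsetvl insn.  */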
+
+/* Implements codegen for
+   vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+	int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+					    e.index_mode ()));
+	else
+	  {
+	    unsigned src_eew_bitsize
+	      = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+	    unsigned dst_eew_bitsize
+	      = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+	    if (dst_eew_bitsize == src_eew_bitsize)
+	      {
+		return e.use_exact_insn (
+		  code_for_pred_th_indexed_load_same_eew (
+		    unspec, e.vector_mode ()));
+	      }
+	    else if (dst_eew_bitsize > src_eew_bitsize)
+	      {
+		unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+		switch (factor)
+		  {
+		  case 2:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x2_greater_eew (
+			unspec, e.vector_mode ()));
+		  case 4:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x4_greater_eew (
+			unspec, e.vector_mode ()));
+		  case 8:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x8_greater_eew (
+			unspec, e.vector_mode ()));
+		  default:
+		    gcc_unreachable ();
+		  }
+	      }
+	    else
+	      {
+		unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+		switch (factor)
+		  {
+		  case 2:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x2_smaller_eew (
+			unspec, e.vector_mode ()));
+		  case 4:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x4_smaller_eew (
+			unspec, e.vector_mode ()));
+		  case 8:
+		    return e.use_exact_insn (
+		      code_for_pred_th_indexed_load_x8_smaller_eew (
+			unspec, e.vector_mode ()));
+		  default:
+		    gcc_unreachable ();
+		  }
+	      }
+	  }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_th_strided_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_th_store (e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
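+
+/* The indexed-access pattern above is selected from the ratio of the
+   data EEW to the index EEW.  For instance (a sketch, not tied to one
+   intrinsic): an indexed load of 32-bit elements with 8-bit indices
+   has dst_eew_bitsize / src_eew_bitsize == 4 and uses the x4
+   greater-EEW pattern, while an 8-bit load with 64-bit indices takes
+   the x8 smaller-EEW path.  */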
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+	return e.use_exact_insn (
+	  code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
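+
+/* Note: vnclip/vnclipu take an explicit vxrm rounding-mode operand,
+   which is why both has_rounding_mode_operand_p and
+   may_require_vxrm_p return true above.  */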
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vmadc.  */
+class th_vmadc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+	return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+	return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+	return e.use_exact_insn (
+	  code_for_pred_th_madc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+	return e.use_exact_insn (
+	  code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
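+
+/* E.g. the vvm/vxm forms of vmadc take a carry-in mask and the vv/vx
+   forms do not; both produce the carry-out mask, hence the separate
+   madc and madc_overflow patterns above.  */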
+
+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+	return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+	return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+	return e.use_exact_insn (
+	  code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+	return e.use_exact_insn (
+	  code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+	gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+
+/* Implements vfncvt.f.  */
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (
+	code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+	code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+	code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
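+
+/* E.g. the f_w form narrows float to float via the trunc pattern,
+   while the x_w and xu_w forms convert wider integers to narrower
+   floats through FLOAT/UNSIGNED_FLOAT.  */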
+
+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
+
+/* Implements vleff.v.  */
+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
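+
+/* Note: a fault-only-first load may write a smaller value to vl when
+   an element traps, so the call properties above include CP_WRITE_CSR
+   in addition to CP_READ_MEMORY.  */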
+
+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vluxseg.v/vloxseg.v.  */
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (
+	UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vsuxseg.v/vsoxseg.v.  */
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (
+	UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
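+
+/* For instance, BASE (th_vsetvl) expands to
+
+     namespace bases { const function_base *const th_vsetvl
+       = &th_vsetvl_obj; }
+
+   so each object gets a stable address that the .def file can refer
+   to.  */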
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+} // end namespace bases
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+;; Machine description for the RISC-V XTheadVector extension.
+;; Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+;; Semiconductor Co., Ltd.
+
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+  (UNSPEC_REDUC_SUM "redsum")
+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
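+
+;; A sketch of the effect of the split above: a plain whole-register
+;; move such as (set (reg:V v1) (reg:V v2)) is rewritten into the
+;; VLMAX-predicated pred_th_whole_mov pattern below, which emits
+;; vmv.v.v, vle.v or vse.v depending on where the operands live.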
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
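+
+;; XTheadVector has no dedicated mask load/store instructions, so
+;; mask-mode whole moves reuse vle.v/vse.v; the fixed sew/vlmul
+;; attributes above pin the emulation to SEW=8 at LMUL=1.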
+
+(define_expand "@pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand")
+         (match_operand 4 "vector_length_operand")
+         (match_operand 5 "const_int_operand")
+         (match_operand 6 "const_int_operand")
+         (match_operand 7 "const_int_operand")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")
+	(if_then_else:V_VLSI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+	     (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (vec_duplicate:V_VLSI
+	    (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))
+	  (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.x\t%0,%3
+   vmv.v.x\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vmv.s.x\t%0,%3
+   vmv.s.x\t%0,%3"
+  "(register_operand (operands[3], <VEL>mode)
+  || CONST_POLY_INT_P (operands[3]))
+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+  [(set (match_dup 0)
+	(if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+	     (match_dup 5) (match_dup 6) (match_dup 7)
+	     (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (vec_duplicate:V_VLSI (match_dup 3))
+	  (match_dup 2)))]
+  {
+    gcc_assert (can_create_pseudo_p ());
+    if (CONST_POLY_INT_P (operands[3]))
+      {
+	rtx tmp = gen_reg_rtx (<VEL>mode);
+	emit_move_insn (tmp, operands[3]);
+	operands[3] = tmp;
+      }
+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+				GET_MODE_ALIGNMENT (<VEL>mode));
+    m = validize_mem (m);
+    emit_move_insn (m, operands[3]);
+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+    operands[3] = m;
+
+    /* For SEW = 64 in RV32 system, we expand vmv.s.x:
+       andi a2,a2,1
+       vsetvl zero,a2,e64
+       vlse64.v  */
+    if (satisfies_constraint_Wb1 (operands[1]))
+      {
+	operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+	operands[1] = CONSTM1_RTX (<VM>mode);
+      }
+  }
+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+   (set_attr "mode" "<MODE>")])
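+
+;; Note: the split above spills the scalar operand to a stack slot and
+;; reloads it with a zero-stride vlse.v, since an element wider than
+;; Pmode (e.g. 64 bits on RV32) cannot be moved directly from a GPR.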
+
+(define_insn "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")
+	(if_then_else:V_VLSF_ZVFHMIN
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+	     (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (vec_duplicate:V_VLSF_ZVFHMIN
+	    (match_operand:<VEL> 3 "direct_broadcast_operand"       " f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))
+	  (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vfmv.v.f\t%0,%3
+   vfmv.v.f\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vfmv.s.f\t%0,%3
+   vfmv.s.f\t%0,%3"
+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+   (set_attr "mode" "<MODE>")])
+
+;; Predicated vector move: vle.v/vse.v for memory operands, vmv.v.v for
+;; register copies.
+(define_insn_and_split "*pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")
+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+        || register_operand (operands[3], <MODE>mode)))"
+  "@
+   vle.v\t%0,%3%p1
+   vle.v\t%0,%3
+   vle.v\t%0,%3,%1.t
+   vse.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
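+;; Mask-register moves: the memory alternatives split to plain moves;
+;; vmcpy.m (RVV 1.0: vmmv.m) copies masks, and vmclr.m/vmset.m materialize
+;; constant masks.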
+(define_insn_and_split "@pred_th_mov<mode>"
+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")
+	(if_then_else:VB_VLS
+	  (unspec:VB_VLS
+	    [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+	     (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")
+	     (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")
+	  (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   #
+   #
+   vmcpy.m\t%0,%3
+   vmclr.m\t%0
+   vmset.m\t%0"
+  "&& !reload_completed"
+  [(const_int 0)]
+  {
+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+        || (REG_P (operands[0]) && REG_P (operands[3])
+	    && INTVAL (operands[5]) == riscv_vector::VLMAX))
+      {
+	emit_move_insn (operands[0], operands[3]);
+	DONE;
+      }
+
+    FAIL;
+  }
+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+   (set_attr "mode" "<MODE>")])
+
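+;; Predicated store: (match_dup 0) keeps the prior memory contents in
+;; masked-off elements.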
+(define_insn "@pred_th_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operand:V 2 "register_operand"         "    vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vse.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
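+;; Strided load: vlse.v; the trailing alternatives, whose stride is
+;; constrained to the element size (per <V:stride_load_constraint>), are
+;; emitted as unit-stride vle.v.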
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+	     (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")
+	     (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+	  (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+  vlse.v\t%0,%3,%z4%p1
+  vlse.v\t%0,%3,%z4
+  vlse.v\t%0,%3,%z4,%1.t
+  vle.v\t%0,%3%p1
+  vle.v\t%0,%3
+  vle.v\t%0,%3,%1.t"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
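+;; Strided store: vsse.v, or unit-stride vse.v when the stride is
+;; constrained to the element size.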
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK,       rK")
+	     (match_operand 5 "const_int_operand"        "    i,        i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")
+	     (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "@
+  vsse.v\t%3,%0,%z2%p1
+  vse.v\t%3,%0%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
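+;; Indexed loads.  XTheadVector indices are always SEW-wide, so the
+;; same-EEW pattern and all of the widening/narrowing index variants
+;; below emit the same vlxe.v instruction.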
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+	     (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+	     (match_operand 6 "const_int_operand"         "  i,  i, i,  i")
+	     (match_operand 7 "const_int_operand"         "  i,  i, i,  i")
+	     (match_operand 8 "const_int_operand"         "  i,  i, i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+	  (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST EEW is greater than SOURCE EEW.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+  [(set (match_operand:VEEWEXT2 0 "register_operand"                    "=&vr,  &vr")
+	(if_then_else:VEEWEXT2
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "    i,    i")
+	     (match_operand 7 "const_int_operand"                      "    i,    i")
+	     (match_operand 8 "const_int_operand"                      "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWEXT2
+	    [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)
+	  (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+  [(set (match_operand:VEEWEXT4 0 "register_operand"                    "=&vr,  &vr")
+	(if_then_else:VEEWEXT4
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "    i,    i")
+	     (match_operand 7 "const_int_operand"                      "    i,    i")
+	     (match_operand 8 "const_int_operand"                      "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWEXT4
+	    [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)
+	  (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+  [(set (match_operand:VEEWEXT8 0 "register_operand"                    "=&vr,  &vr")
+	(if_then_else:VEEWEXT8
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "    i,    i")
+	     (match_operand 7 "const_int_operand"                      "    i,    i")
+	     (match_operand 8 "const_int_operand"                      "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWEXT8
+	    [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)
+	  (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST EEW is smaller than SOURCE EEW.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")
+	(if_then_else:VEEWTRUNC2
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWTRUNC2
+	    [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+	  (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")
+	(if_then_else:VEEWTRUNC4
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWTRUNC4
+	    [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+	  (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")
+	(if_then_else:VEEWTRUNC8
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VEEWTRUNC8
+	    [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+	  (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
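+;; Indexed stores: vsxe.v (ordered) / vsuxe.v (unordered), one pattern per
+;; index/data ratio.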
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:RATIO64I 2 "register_operand" "  vr")
+	   (match_operand:RATIO64 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:RATIO32I 2 "register_operand" "  vr")
+	   (match_operand:RATIO32 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:RATIO16I 2 "register_operand" "  vr")
+	   (match_operand:RATIO16 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:RATIO8I 2 "register_operand" "  vr")
+	   (match_operand:RATIO8 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:RATIO4I 2 "register_operand" "  vr")
+	   (match_operand:RATIO4 3 "register_operand"  "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")
+	   (match_operand:RATIO2I 2 "register_operand"  "  vr")
+	   (match_operand:RATIO2 3 "register_operand"   "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")
+	   (match_operand:RATIO1 2 "register_operand"   "  vr")
+	   (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO1:MODE>")])
+
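+;; Mask population count: vmpopc.m (RVV 1.0: vcpop.m).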
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"               "=r")
+	(popcount:P
+	  (unspec:VB
+	    [(and:VB
+	       (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+	       (match_operand:VB 2 "register_operand"    "   vr"))
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+  "TARGET_XTHEADVECTOR"
+  "vmpopc.m\t%0,%2%p1"
+  [(set_attr "type" "vmpop")
+   (set_attr "mode" "<VB:MODE>")])
+
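+;; Find-first-set mask bit: vmfirst.m (RVV 1.0: vfirst.m) yields the
+;; 0-based index of the first set bit, or -1 if none; hence FFS minus one.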
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"                 "=r")
+	(plus:P
+	  (ffs:P
+	    (unspec:VB
+	      [(and:VB
+	         (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+	         (match_operand:VB 2 "register_operand"    "   vr"))
+	       (match_operand 3 "vector_length_operand"    "   rK")
+	       (match_operand 4 "const_int_operand"        "    i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+	  (const_int -1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmfirst.m\t%0,%2%p1"
+  [(set_attr "type" "vmffs")
+   (set_attr "mode" "<VB:MODE>")])
+
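+;; Narrowing float-to-integer conversions: vfncvt.x.f.v/vfncvt.xu.f.v
+;; under the dynamic FRM rounding mode.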
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<VNCONVERT>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+	     (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:<VNCONVERT>
+	     [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)
+	  (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftoi")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<VNCONVERT>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+	     (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float:<VNCONVERT>
+	     (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))
+	  (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.x<u>.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtitof")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<optab><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (truncate:<V_DOUBLE_TRUNC>
+	    (any_shiftrt:VWEXTI
+	     (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+	     (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (truncate:<V_DOUBLE_TRUNC>
+	    (any_shiftrt:VWEXTI
+	     (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+	     (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
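+;; Integer narrowing has no dedicated instruction; truncate with a
+;; narrowing shift right by zero (vnsrl.vx with x0).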
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (truncate:<V_DOUBLE_TRUNC>
+	    (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnsrl.vx\t%0,%3,x0%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (float_truncate:<V_DOUBLE_TRUNC>
+	     (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftof")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
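+;; Fault-only-first load: vleff.v also writes the number of elements
+;; actually loaded back to VL, modeled by the parallel set of VL_REGNUM.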
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")
+	(if_then_else:V
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")
+	     (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")
+	     (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V
+	    [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)
+	  (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))
+   (set (reg:SI VL_REGNUM)
+	  (unspec:SI
+	    [(if_then_else:V
+	       (unspec:<VM>
+		[(match_dup 1) (match_dup 4) (match_dup 5)
+		 (match_dup 6) (match_dup 7)
+		 (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	       (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+	       (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vleff.v\t%0,%3%p1"
+  [(set_attr "type" "vldff")
+   (set_attr "mode" "<MODE>")])
+
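+;; Segment (NF-tuple) unit-stride and strided accesses:
+;; vlseg<nf>e.v / vsseg<nf>e.v and vlsseg<nf>e.v / vssseg<nf>e.v.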
+(define_insn "@pred_th_unit_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+	(if_then_else:VT
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 5 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VT
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+	     (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+	  (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>e.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegde")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	      (match_operand 3 "vector_length_operand"    "   rK")
+	      (match_operand 4 "const_int_operand"        "    i")
+	      (reg:SI VL_REGNUM)
+	      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+	   (match_operand:VT 2 "register_operand"         "   vr")
+	   (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+	(if_then_else:VT
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 8 "const_int_operand"        "    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VT
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+	     (mem:BLK (scratch))] UNSPEC_STRIDED)
+	  (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+  [(set_attr "type" "vlsegds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	      (match_operand 4 "vector_length_operand"    "   rK")
+	      (match_operand 5 "const_int_operand"        "    i")
+	      (reg:SI VL_REGNUM)
+	      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+	   (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")
+	   (match_operand:VT 3 "register_operand"         "   vr")
+	   (mem:BLK (scratch))] UNSPEC_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+  [(set_attr "type" "vssegts")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+	(if_then_else:VT
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 5 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 6 "const_int_operand"        "    i,     i,     i")
+	     (match_operand 7 "const_int_operand"        "    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:VT
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+	     (mem:BLK (scratch))] UNSPEC_VLEFF)
+	  (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))
+   (set (reg:SI VL_REGNUM)
+        (unspec:SI
+          [(if_then_else:VT
+	     (unspec:<VM>
+	       [(match_dup 1) (match_dup 4) (match_dup 5)
+	        (match_dup 6) (match_dup 7)
+	        (reg:SI VL_REGNUM)
+	        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	     (unspec:VT
+	        [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+	     (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>eff.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegdff")
+   (set_attr "mode" "<MODE>")])
+
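+;; Indexed segment loads: vlxseg<nf>e.v, one pattern per tuple mode and
+;; index ratio.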
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")
+	(if_then_else:V1T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V1T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)
+	  (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")
+	(if_then_else:V2T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V2T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)
+	  (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")
+	(if_then_else:V4T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V4T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)
+	  (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")
+	(if_then_else:V8T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V8T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)
+	  (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")
+	(if_then_else:V16T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V16T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)
+	  (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")
+	(if_then_else:V32T
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"    "   rK,   rK")
+	     (match_operand 6 "const_int_operand"        "    i,    i")
+	     (match_operand 7 "const_int_operand"        "    i,    i")
+	     (match_operand 8 "const_int_operand"        "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:V32T
+	    [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)
+	  (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V32T:MODE>")])
+
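+;; Indexed segment stores: vsxseg<nf>e.v (ordered) / vsuxseg<nf>e.v
+;; (unordered).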
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO64I 2 "register_operand"       "   vr")
+	   (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO32I 2 "register_operand"       "   vr")
+	   (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO16I 2 "register_operand"       "   vr")
+	   (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO8I 2 "register_operand"       "   vr")
+	   (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO4I 2 "register_operand"      "   vr")
+	   (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"        "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+	   (match_operand:RATIO2I 2 "register_operand"      "   vr")
+	   (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V32T:MODE>")])
+
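+;; Float negate/absolute value via sign injection: vfsgnjn.vv/vfsgnjx.vv
+;; with both source operands the same register.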
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop_neg:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vfsgnjn.vv\t%0,%3,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop_abs:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vfsgnjx.vv\t%0,%3,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
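+;; Bitwise NOT: vnot.v (an alias of vxor with an all-ones operand).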
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")
+	(if_then_else:V_VLSI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        " i, i,  i,  i")
+	     (match_operand 6 "const_int_operand"        " i, i,  i,  i")
+	     (match_operand 7 "const_int_operand"        " i, i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (not_unop:V_VLSI
+	    (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
+	  (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vnot.v\t%0,%3%p1"
+  [(set_attr "type" "vialu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
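+;; Integer negate: vrsub.vx with x0 computes 0 - vs2.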
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"	 "=vd,vd, vr, vr")
+	(if_then_else:V_VLSI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")
+	     (match_operand 5 "const_int_operand"	 " i, i,  i,  i")
+	     (match_operand 6 "const_int_operand"	 " i, i,  i,  i")
+	     (match_operand 7 "const_int_operand"	 " i, i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (neg_unop:V_VLSI
+	    (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
+	  (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vrsub.vx\t%0,%3,x0%p1"
+  [(set_attr "type" "vialu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
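+;; Narrowing fixed-point clips: vnclip/vnclipu under the dynamic VXRM
+;; rounding mode.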
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:<V_DOUBLE_TRUNC>
+	    [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+	     (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+	(if_then_else:<V_DOUBLE_TRUNC>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+	     (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+	     (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (unspec:<V_DOUBLE_TRUNC>
+	    [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+	     (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)
+	  (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")
+	(unspec:<V_LMUL1>
+	  [(unspec:<VM>
+	    [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")
+	     (match_operand               5 "vector_length_operand" "   rK,   rK")
+	     (match_operand               6 "const_int_operand"     "    i,    i")
+	     (match_operand               7 "const_int_operand"     "    i,    i")
+	     (match_operand               8 "const_int_operand"     "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_LMUL1> [
+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")
+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")
+           ] ANY_FREDUC_SUM)
+	   (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")
+	(unspec:<V_EXT_LMUL1>
+	  [(unspec:<VM>
+	    [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")
+	     (match_operand                5 "vector_length_operand" "   rK,   rK")
+	     (match_operand                6 "const_int_operand"     "    i,    i")
+	     (match_operand                7 "const_int_operand"     "    i,    i")
+	     (match_operand                8 "const_int_operand"     "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_EXT_LMUL1> [
+	     (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")
+	     (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")
+           ] ANY_FWREDUC_SUM)
+	   (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfwred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
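+;; Carry/borrow-out: vmadc/vmsbc produce a mask of per-element carry
+;; (borrow) bits.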
+(define_insn "@pred_th_madc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+	(unspec:<VM>
+	   [(plus:VI
+	     (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+	     (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+	    (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")
+	       (match_operand 5 "const_int_operand"     "   i,   i,   i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2m\t%0,%1,%v2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")
+	(unspec:<VM>
+	   [(minus:VI
+	     (match_operand:VI 1 "register_operand"     "  vr")
+	     (match_operand:VI 2 "register_operand"     " vr"))
+	    (match_operand:<VM> 3 "register_operand"    " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand" " rK")
+	       (match_operand 5 "const_int_operand"     "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vvm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_QHS
+	     (vec_duplicate:VI_QHS
+	       (match_operand:<VEL> 2 "register_operand" "  r"))
+	     (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+	    (match_operand:<VM> 3 "register_operand"     " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"  " rK")
+	       (match_operand 5 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_QHS
+	     (vec_duplicate:VI_QHS
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+	    (match_operand:<VM> 3 "register_operand"     " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"  " rK")
+	       (match_operand 5 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_int_operand"))
+	     (match_operand:VI_D 1 "register_operand"))
+	    (match_operand:<VM> 3 "register_operand")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand")
+	       (match_operand 5 "const_int_operand")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+	operands,
+	/* scalar op */&operands[2],
+	/* vl */operands[4],
+	<MODE>mode,
+	riscv_vector::simm5_p (operands[2]),
+	[] (rtx *operands, rtx broadcast_scalar) {
+	  emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+	       broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+	(riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
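+
+;; A sketch of the expander above: for 64-bit elements the scalar operand
+;; may not fit in a GPR on rv32, so riscv_vector::sew64_scalar_helper
+;; (assumed to behave like the helper shared with vector.md) either leaves
+;; operands[2] in a form the *pred_th_madc<mode>_scalar insns below can
+;; match, or broadcasts the scalar to a vector and emits the instruction
+;; through the lambda, in which case the expander is DONE.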
+
+(define_insn "*pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_D 1 "register_operand"    "  vr"))
+	    (match_operand:<VM> 3 "register_operand"     " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"  " rK")
+	       (match_operand 5 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (sign_extend:<VEL>
+	         (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+	     (match_operand:VI_D 1 "register_operand"         "  vr"))
+	    (match_operand:<VM> 3 "register_operand"          " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"       " rK")
+	       (match_operand 5 "const_int_operand"           "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_int_operand"))
+	     (match_operand:VI_D 1 "register_operand"))
+	    (match_operand:<VM> 3 "register_operand")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand")
+	       (match_operand 5 "const_int_operand")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+	operands,
+	/* scalar op */&operands[2],
+	/* vl */operands[4],
+	<MODE>mode,
+	false,
+	[] (rtx *operands, rtx broadcast_scalar) {
+	  emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+	       broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+	(riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_D 1 "register_operand"    "  vr"))
+	    (match_operand:<VM> 3 "register_operand"     " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"  " rK")
+	       (match_operand 5 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (sign_extend:<VEL>
+	         (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+	     (match_operand:VI_D 1 "register_operand"         "  vr"))
+	    (match_operand:<VM> 3 "register_operand"          " vm")
+	    (unspec:<VM>
+	      [(match_operand 4 "vector_length_operand"       " rK")
+	       (match_operand 5 "const_int_operand"           "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+	(unspec:<VM>
+	   [(plus:VI
+	     (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+	     (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")
+	       (match_operand 4 "const_int_operand"     "   i,   i,   i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2\t%0,%1,%v2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(minus:VI
+	     (match_operand:VI 1 "register_operand"     "   vr")
+	     (match_operand:VI 2 "register_operand"     "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand" "  rK")
+	       (match_operand 4 "const_int_operand"     "   i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vv\t%0,%1,%2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_QHS
+	     (vec_duplicate:VI_QHS
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"  " rK")
+	       (match_operand 4 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_QHS
+	     (vec_duplicate:VI_QHS
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"  " rK")
+	       (match_operand 4 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_int_operand"))
+	     (match_operand:VI_D 1 "register_operand"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand")
+	       (match_operand 4 "const_int_operand")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+	operands,
+	/* scalar op */&operands[2],
+	/* vl */operands[3],
+	<MODE>mode,
+	riscv_vector::simm5_p (operands[2]),
+	[] (rtx *operands, rtx broadcast_scalar) {
+	  emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+	       broadcast_scalar, operands[3], operands[4]));
+        },
+	(riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_D 1 "register_operand"    "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"  " rK")
+	       (match_operand 4 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+	(unspec:<VM>
+	   [(plus:VI_D
+	     (vec_duplicate:VI_D
+	       (sign_extend:<VEL>
+	         (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+	     (match_operand:VI_D 1 "register_operand"         "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"       " rK")
+	       (match_operand 4 "const_int_operand"           "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_int_operand"))
+	     (match_operand:VI_D 1 "register_operand"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand")
+	       (match_operand 4 "const_int_operand")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+	operands,
+	/* scalar op */&operands[2],
+	/* vl */operands[3],
+	<MODE>mode,
+	false,
+	[] (rtx *operands, rtx broadcast_scalar) {
+	  emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+	       broadcast_scalar, operands[3], operands[4]));
+        },
+	(riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+	     (match_operand:VI_D 1 "register_operand"    "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"  " rK")
+	       (match_operand 4 "const_int_operand"      "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+	(unspec:<VM>
+	   [(minus:VI_D
+	     (vec_duplicate:VI_D
+	       (sign_extend:<VEL>
+	         (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+	     (match_operand:VI_D 1 "register_operand"         "  vr"))
+	    (unspec:<VM>
+	      [(match_operand 3 "vector_length_operand"      " rK")
+	       (match_operand 4 "const_int_operand"          "  i")
+	       (reg:SI VL_REGNUM)
+	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")
+		   (match_operand 4 "const_int_operand" "i")
+		   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)
+		    (match_dup 4)
+		    (match_dup 5)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
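+;; For example (illustrative), this might emit
+;;   vsetvli zero,zero,e32,m1
+;; which the pattern models as an update of VTYPE_REGNUM only.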
+(define_insn "*th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")
+	   (match_operand 2 "const_int_operand" "i")
+	   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
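+;; For example (illustrative), taking the AVL from a0 while discarding
+;; the GPR result:
+;;   vsetvli zero,a0,e8,m1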
+(define_insn "*th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_operand 3 "const_int_operand" "i")
+		    (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; This pattern is emitted by the vsetvl/vsetvlmax intrinsics with no side
+;; effects.  Since many optimization passes run between "expand" and
+;; "reload_completed", keeping the pattern in this form lets those passes
+;; optimize it.
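+;; For example (a sketch), two back-to-back vsetvl calls with identical
+;; arguments can be CSE'd while the pattern is still a single set; only
+;; after epilogue_completed is it split into the parallel that also sets
+;; VL_REGNUM and VTYPE_REGNUM.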
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")
+		   (match_operand 4 "const_int_operand" "i")
+		   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+		     (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+		      (match_dup 5)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"        "   0")
+	     (match_operand 5 "vector_length_operand"        "  rK")
+	     (match_operand 6 "const_int_operand"            "   i")
+	     (match_operand 7 "const_int_operand"            "   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "comparison_except_ltge_operator"
+	     [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+	      (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_ltge_operator"
+	     [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+	      (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_ltge_operator"
+	     [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+	      (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"        "   0")
+	     (match_operand 5 "vector_length_operand"        "  rK")
+	     (match_operand 6 "const_int_operand"            "   i")
+	     (match_operand 7 "const_int_operand"            "   i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "ltge_operator"
+	     [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+	      (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "ltge_operator"
+	     [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+	      (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "ltge_operator"
+	     [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+	      (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"          "  0")
+	     (match_operand 5 "vector_length_operand"          " rK")
+	     (match_operand 6 "const_int_operand"              "  i")
+	     (match_operand 7 "const_int_operand"              "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")
+	      (vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 4 "register_operand"      "  r"))])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")
+	      (vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")
+	      (vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"           "  0")
+	     (match_operand 5 "vector_length_operand"           " rK")
+	     (match_operand 6 "const_int_operand"               "  i")
+	     (match_operand 7 "const_int_operand"               "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "equality_operator"
+	     [(vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 4 "register_operand"       "  r"))
+	      (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+	      (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_QHS
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+	      (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"           "  0")
+	     (match_operand 5 "vector_length_operand"           " rK")
+	     (match_operand 6 "const_int_operand"               "  i")
+	     (match_operand 7 "const_int_operand"               "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 3 "register_operand"          " vr")
+	      (vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 4 "register_operand"       "  r"))])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"           "  0")
+	     (match_operand 5 "vector_length_operand"           " rK")
+	     (match_operand 6 "const_int_operand"               "  i")
+	     (match_operand 7 "const_int_operand"               "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 4 "register_operand"       "  r"))
+	      (match_operand:V_VLSI_D 3 "register_operand"          " vr")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")
+	      (vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")
+	      (vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+	      (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+	      (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"          "  0")
+	     (match_operand 5 "vector_length_operand"          " rK")
+	     (match_operand 6 "const_int_operand"              "  i")
+	     (match_operand 7 "const_int_operand"              "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 3 "register_operand"         " vr")
+	      (vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 4 "register_operand" "  r")))])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"          "   rK,   rK")
+	     (match_operand 7 "const_int_operand"              "    i,    i")
+	     (match_operand 8 "const_int_operand"              "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")
+	      (vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])
+	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "comparison_except_eqge_operator"
+	     [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")
+	      (vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])
+	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"            "  0")
+	     (match_operand 5 "vector_length_operand"            " rK")
+	     (match_operand 6 "const_int_operand"                "  i")
+	     (match_operand 7 "const_int_operand"                "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 4 "register_operand"   "  r")))
+	      (match_operand:V_VLSI_D 3 "register_operand"           " vr")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"          "   rK,   rK")
+	     (match_operand 7 "const_int_operand"              "    i,    i")
+	     (match_operand 8 "const_int_operand"              "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))
+	      (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSI_D
+	        (sign_extend:<VEL>
+	          (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))
+	      (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "signed_order_operator"
+	     [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")
+	      (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vv\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"          "  0")
+	     (match_operand 5 "vector_length_operand"          " rK")
+	     (match_operand 6 "const_int_operand"              "  i")
+	     (match_operand 7 "const_int_operand"              "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "signed_order_operator"
+	     [(match_operand:V_VLSF 3 "register_operand"           " vr")
+	      (match_operand:V_VLSF 4 "register_operand"           " vr")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vv\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "signed_order_operator"
+	     [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+	      (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vv\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"         "  0")
+	     (match_operand 5 "vector_length_operand"         " rK")
+	     (match_operand 6 "const_int_operand"             "  i")
+	     (match_operand 7 "const_int_operand"             "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "signed_order_operator"
+	     [(match_operand:V_VLSF 3 "register_operand"      " vr")
+	      (vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 4 "register_operand"     "  f"))])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "signed_order_operator"
+	     [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")
+	      (vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "signed_order_operator"
+	     [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")
+	      (vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "register_operand"         "  0")
+	     (match_operand 5 "vector_length_operand"         " rK")
+	     (match_operand 6 "const_int_operand"             "  i")
+	     (match_operand 7 "const_int_operand"             "  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 2 "equality_operator"
+	     [(vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 4 "register_operand"     "  f"))
+	      (match_operand:V_VLSF 3 "register_operand"      " vr")])
+	  (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 5 "register_operand"     "    f,    f"))
+	      (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+	(if_then_else:<VM>
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+	     (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+	     (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (match_operator:<VM> 3 "equality_operator"
+	     [(vec_duplicate:V_VLSF
+	        (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))
+	      (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+	  (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
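
The net effect of the iterator changes above is that no fractional-LMUL
mode is available once XTheadVector is enabled, so the corresponding types
never come into existence.  A rough sketch of what this means for user
code (the diagnostic wording is assumed here, not taken from the patch):

	/* Compiled with xtheadvector enabled: fractional-LMUL types are
	   not registered, so the first declaration is rejected.  */
	void foo (void)
	{
	  vint8mf2_t t;	/* hypothetical error: unknown type name */
	  vint8m1_t u;	/* still accepted: LMUL >= 1 types remain */
	}
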
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
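+	 ;; For XTheadVector, the mask modes RVVMF16BI/RVVMF32BI/RVVMF64BI
+	 ;; carry an SEW of 16/32/64 instead of 8.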
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
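+	 ;; XTheadVector load/store patterns do not use the ratio attribute,
+	 ;; so mark it invalid for them.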
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
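+# A test can then be gated on it with, for example:
+#   { dg-do compile { target riscv_xtheadvector } }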
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics.
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
                     ` (4 preceding siblings ...)
  2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
@ 2023-12-20 12:36   ` Jun Sha (Joshua)
  2023-12-25  6:31     ` [PATCH v4 " Jun Sha (Joshua)
  2023-12-20 23:04   ` [PATCH v3 0/6] RISC-V: Support XTheadVector extension 钟居哲
  2023-12-20 23:08   ` [PATCH " 钟居哲
  7 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-20 12:36 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch only covers the generation of the XTheadVector-specific
load/store instructions and the vext instructions.

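For illustration, the new intrinsics follow the usual RVV calling
convention.  A minimal sketch (the intrinsic names are assumed from the
new test files; the exact prototypes are the ones defined in
thead-vector-builtins-functions.def):

	#include <riscv_th_vector.h>

	/* Byte load (vlb.v) followed by a byte store (vsb.v), assuming
	   names of the form vlb_v_<type>/vsb_v_<type> with the common
	   __riscv_ prefix.  */
	void copy_i8 (const int8_t *in, int8_t *out, size_t vl)
	{
	  vint8m1_t v = __riscv_vlb_v_i8m1 (in, vl);
	  __riscv_vsb_v_i8m1 (out, v, vl);
	}
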
gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc
	(class th_loadstore_width): Define new builtin bases.
	(BASE): Define new builtin bases.
	* config/riscv/riscv-vector-builtins-bases.h:
	Define new builtin class.
	* config/riscv/riscv-vector-builtins-functions.def (vlsegff):
	Include thead-vector-builtins-functions.def.
	* config/riscv/riscv-vector-builtins-shapes.cc
	(struct th_loadstore_width_def): Define new builtin shapes.
	(struct th_indexed_loadstore_width_def):
	Define new builtin shapes.
	(SHAPE): Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-shapes.h:
	Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-types.def
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	(vint8m1_t): Add datatypes for XTheadVector.
	(vint8m2_t): Likewise.
	(vint8m4_t): Likewise.
	(vint8m8_t): Likewise.
	(vint16m1_t): Likewise.
	(vint16m2_t): Likewise.
	(vint16m4_t): Likewise.
	(vint16m8_t): Likewise.
	(vint32m1_t): Likewise.
	(vint32m2_t): Likewise.
	(vint32m4_t): Likewise.
	(vint32m8_t): Likewise.
	(vint64m1_t): Likewise.
	(vint64m2_t): Likewise.
	(vint64m4_t): Likewise.
	(vint64m8_t): Likewise.
	(vuint8m1_t): Likewise.
	(vuint8m2_t): Likewise.
	(vuint8m4_t): Likewise.
	(vuint8m8_t): Likewise.
	(vuint16m1_t): Likewise.
	(vuint16m2_t): Likewise.
	(vuint16m4_t): Likewise.
	(vuint16m8_t): Likewise.
	(vuint32m1_t): Likewise.
	(vuint32m2_t): Likewise.
	(vuint32m4_t): Likewise.
	(vuint32m8_t): Likewise.
	(vuint64m1_t): Likewise.
	(vuint64m2_t): Likewise.
	(vuint64m4_t): Likewise.
	(vuint64m8_t): Likewise.
	* config/riscv/riscv-vector-builtins.cc
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: Add new patterns.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 .../riscv/riscv-vector-builtins-shapes.cc     | 126 +++++++
 .../riscv/riscv-vector-builtins-shapes.h      |   3 +
 .../riscv/riscv-vector-builtins-types.def     | 120 +++++++
 gcc/config/riscv/riscv-vector-builtins.cc     | 308 +++++++++++++++++-
 .../riscv/thead-vector-builtins-functions.def |  32 ++
 gcc/config/riscv/thead-vector-builtins.cc     | 141 ++++++++
 gcc/config/riscv/thead-vector-builtins.h      |  31 ++
 gcc/config/riscv/thead-vector.md              | 255 ++++++++++++++-
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |  68 ++++
 14 files changed, 1422 insertions(+), 2 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c

diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 6b49404a1fa..7d7c1f6f4b1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -211,6 +211,104 @@ struct indexed_loadstore_def : public function_shape
   }
 };
 
+/* th_loadstore_width_def class.  */
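+/* Non-overloaded instances are named <base>_<op>_<type> (see get_name
+   below); for illustration, a byte load for vint8m1_t would get a name
+   of the form vlb_v_i8m1, to which the builder prepends "__riscv_".  */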
+struct th_loadstore_width_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Skip registering the intrinsics if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* There is no intrinsic to name if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return nullptr;
+
+    /* Return nullptr if it can not be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to the rvv-intrinsic-doc, no "_m" suffix is added
+       for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+
+/* th_indexed_loadstore_width_def class.  */
+struct th_indexed_loadstore_width_def : public function_shape
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Skip registering the intrinsics if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES;
+	 ++pred_idx)
+      {
+	for (unsigned int vec_type_idx = 0;
+	     group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES;
+	     ++vec_type_idx)
+	  {
+	   tree index_type = group.ops_infos.args[1].get_tree_type (
+	      group.ops_infos.types[vec_type_idx].index);
+	   if (!index_type)
+	      continue;
+	   build_one (b, group, pred_idx, vec_type_idx);
+	  }
+      }
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if it can not be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to the rvv-intrinsic-doc, no "_m" suffix is added
+       for the vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 /* alu_def class.  */
 struct alu_def : public build_base
 {
@@ -632,6 +730,31 @@ struct reduc_alu_def : public build_base
   }
 };
 
+/* th_extract_def class.  */
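+/* Non-overloaded instances are named <base>_<vector type>_<scalar type>,
+   mirroring the suffixes appended in get_name below.  */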
+struct th_extract_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Skip registering the intrinsics if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+      bool overloaded_p) const override
+  {
+    b.append_base_name (instance.base_name);
+    if (overloaded_p)
+      return b.finish_name ();
+    b.append_name (type_suffixes[instance.type.index].vector);
+    b.append_name (type_suffixes[instance.type.index].scalar);
+    return b.finish_name ();
+  }
+};
+
 /* scalar_move_def class.  */
 struct scalar_move_def : public build_base
 {
@@ -1011,6 +1134,8 @@ SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
 SHAPE(indexed_loadstore, indexed_loadstore)
+SHAPE(th_loadstore_width, th_loadstore_width)
+SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width)
 SHAPE(alu, alu)
 SHAPE(alu_frm, alu_frm)
 SHAPE(widen_alu, widen_alu)
@@ -1023,6 +1148,7 @@ SHAPE(move, move)
 SHAPE(mask_alu, mask_alu)
 SHAPE(reduc_alu, reduc_alu)
 SHAPE(reduc_alu_frm, reduc_alu_frm)
+SHAPE(th_extract, th_extract)
 SHAPE(scalar_move, scalar_move)
 SHAPE(vundefined, vundefined)
 SHAPE(misc, misc)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..a822ba05bdd 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -28,6 +28,8 @@ extern const function_shape *const vsetvl;
 extern const function_shape *const vsetvlmax;
 extern const function_shape *const loadstore;
 extern const function_shape *const indexed_loadstore;
+extern const function_shape *const th_loadstore_width;
+extern const function_shape *const th_indexed_loadstore_width;
 extern const function_shape *const alu;
 extern const function_shape *const alu_frm;
 extern const function_shape *const widen_alu;
@@ -41,6 +43,7 @@ extern const function_shape *const mask_alu;
 extern const function_shape *const reduc_alu;
 extern const function_shape *const reduc_alu_frm;
 extern const function_shape *const scalar_move;
+extern const function_shape *const th_extract;
 extern const function_shape *const vundefined;
 extern const function_shape *const misc;
 extern const function_shape *const vset;
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6aa45ae9a7e..e373d29e51c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_I_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_I8_OPS" macro include some signed integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I8_OPS
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I16_OPS" macro include some signed integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I16_OPS
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I32_OPS" macro include some signed integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I32_OPS
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_U_OPS
 #define DEF_RVV_U_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_U8_OPS" macro include some unsigned integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U8_OPS
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U16_OPS" macro include some unsigned integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U16_OPS
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U32_OPS" macro include some unsigned integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U32_OPS
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_F_OPS" macro include all floating-point which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_F_OPS
@@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_I8_OPS (vint8m1_t, 0)
+DEF_RVV_I8_OPS (vint8m2_t, 0)
+DEF_RVV_I8_OPS (vint8m4_t, 0)
+DEF_RVV_I8_OPS (vint8m8_t, 0)
+DEF_RVV_I8_OPS (vint16m1_t, 0)
+DEF_RVV_I8_OPS (vint16m2_t, 0)
+DEF_RVV_I8_OPS (vint16m4_t, 0)
+DEF_RVV_I8_OPS (vint16m8_t, 0)
+DEF_RVV_I8_OPS (vint32m1_t, 0)
+DEF_RVV_I8_OPS (vint32m2_t, 0)
+DEF_RVV_I8_OPS (vint32m4_t, 0)
+DEF_RVV_I8_OPS (vint32m8_t, 0)
+DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I16_OPS (vint16m1_t, 0)
+DEF_RVV_I16_OPS (vint16m2_t, 0)
+DEF_RVV_I16_OPS (vint16m4_t, 0)
+DEF_RVV_I16_OPS (vint16m8_t, 0)
+DEF_RVV_I16_OPS (vint32m1_t, 0)
+DEF_RVV_I16_OPS (vint32m2_t, 0)
+DEF_RVV_I16_OPS (vint32m4_t, 0)
+DEF_RVV_I16_OPS (vint32m8_t, 0)
+DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I32_OPS (vint32m1_t, 0)
+DEF_RVV_I32_OPS (vint32m2_t, 0)
+DEF_RVV_I32_OPS (vint32m4_t, 0)
+DEF_RVV_I32_OPS (vint32m8_t, 0)
+DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_U_OPS (vuint8mf2_t, 0)
@@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_U8_OPS (vuint8m1_t, 0)
+DEF_RVV_U8_OPS (vuint8m2_t, 0)
+DEF_RVV_U8_OPS (vuint8m4_t, 0)
+DEF_RVV_U8_OPS (vuint8m8_t, 0)
+DEF_RVV_U8_OPS (vuint16m1_t, 0)
+DEF_RVV_U8_OPS (vuint16m2_t, 0)
+DEF_RVV_U8_OPS (vuint16m4_t, 0)
+DEF_RVV_U8_OPS (vuint16m8_t, 0)
+DEF_RVV_U8_OPS (vuint32m1_t, 0)
+DEF_RVV_U8_OPS (vuint32m2_t, 0)
+DEF_RVV_U8_OPS (vuint32m4_t, 0)
+DEF_RVV_U8_OPS (vuint32m8_t, 0)
+DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U16_OPS (vuint16m1_t, 0)
+DEF_RVV_U16_OPS (vuint16m2_t, 0)
+DEF_RVV_U16_OPS (vuint16m4_t, 0)
+DEF_RVV_U16_OPS (vuint16m8_t, 0)
+DEF_RVV_U16_OPS (vuint32m1_t, 0)
+DEF_RVV_U16_OPS (vuint32m2_t, 0)
+DEF_RVV_U16_OPS (vuint32m4_t, 0)
+DEF_RVV_U16_OPS (vuint32m8_t, 0)
+DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U32_OPS (vuint32m1_t, 0)
+DEF_RVV_U32_OPS (vuint32m2_t, 0)
+DEF_RVV_U32_OPS (vuint32m4_t, 0)
+DEF_RVV_U32_OPS (vuint32m8_t, 0)
+DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
 
 #undef DEF_RVV_I_OPS
+#undef DEF_RVV_I8_OPS
+#undef DEF_RVV_I16_OPS
+#undef DEF_RVV_I32_OPS
 #undef DEF_RVV_U_OPS
+#undef DEF_RVV_U8_OPS
+#undef DEF_RVV_U16_OPS
+#undef DEF_RVV_U32_OPS
 #undef DEF_RVV_F_OPS
 #undef DEF_RVV_B_OPS
 #undef DEF_RVV_WEXTI_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index f5f9000d89c..500b3b05e4b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -247,6 +247,63 @@ static const rvv_type_info iu_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of the signed integer types (i8 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info i8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the signed integer types (i16 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info i16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the signed integer types (i32 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info i32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the unsigned integer types (u8 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info u8_ops[] = {
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the unsigned integer types (u16 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info u16_ops[] = {
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the unsigned integer types (u32 and wider, LMUL >= 1) that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info u32_ops[] = {
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the signed and unsigned integer types with 8-bit and wider
+   elements (LMUL >= 1) that will be registered for intrinsic functions.  */
+static const rvv_type_info iu8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the signed and unsigned integer types with 16-bit and wider
+   elements (LMUL >= 1) that will be registered for intrinsic functions.  */
+static const rvv_type_info iu16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of the signed and unsigned integer types with 32-bit and wider
+   elements (LMUL >= 1) that will be registered for intrinsic functions.  */
+static const rvv_type_info iu32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 /* A list of all types will be registered for intrinsic functions.  */
 static const rvv_type_info all_ops[] = {
 #define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -914,7 +971,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[]
 
 /* A list of args for vector_type func (vector_type) function.  */
 static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[]
-  = {rvv_arg_type_info (RVV_BASE_vector),
+  = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, eew8_index_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, eew8_index_type, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, size_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
      rvv_arg_type_info_end};
 
 /* A list of none preds that will be registered for intrinsic functions.  */
@@ -1430,6 +1512,14 @@ static CONSTEXPR const rvv_op_info iu_shift_vvv_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      shift_vv_args /* Args */};
 
+/* A static operand information for scalar_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_x_s_u_ops
+  = {iu_ops,          /* Types */
+     OP_TYPE_vx,        /* Suffix */
+     rvv_arg_type_info (RVV_BASE_scalar), /* Return type */
+     v_size_args /* Args */};
+
 /* A static operand information for vector_type func (vector_type, size_t)
  * function registration. */
 static CONSTEXPR const rvv_op_info iu_shift_vvx_ops
@@ -2605,6 +2695,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
      rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
      ext_vcreate_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args  */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew8_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew16_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew32_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
 #define DEF_RVV_TYPE_INDEX(                                                    \
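
For reference, each rvv_op_info record above pins down one intrinsic
signature: the type list, the "_v" operand suffix, the return type and
the argument layout.  A sketch of the C prototypes these records yield
for the i32 case, assuming the standard RVV intrinsic naming scheme
(the unit-stride names match the new tests below; the strided and
indexed names and the index-vector type are inferred from the .def
entries and the eew32 operand info, not taken from the tests):

  /* i32_v_scalar_const_ptr_ops: vector_type func (const scalar_type *).  */
  vint32m1_t __riscv_th_vlw_v_i32m1 (const int32_t *base, size_t vl);
  /* i32_v_scalar_const_ptr_size_ops: adds a size_t stride argument.  */
  vint32m1_t __riscv_th_vlsw_v_i32m1 (const int32_t *base, size_t stride,
                                      size_t vl);
  /* iu32_v_scalar_ptr_index_ops: indexed store, index EEW = data EEW.  */
  void __riscv_th_vsxw_v_i32m1 (int32_t *base, vuint32m1_t index,
                                vint32m1_t value, size_t vl);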
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
index a85ca24cb31..e3df519fe19 100644
--- a/gcc/config/riscv/thead-vector-builtins-functions.def
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -621,6 +621,38 @@ DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds
 DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
 DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
 DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+
+DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlswu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vext_x_v, th_extract, none_preds, iu_x_s_u_ops)
 #undef REQUIRED_EXTENSIONS
 
 #undef DEF_RVV_FUNCTION
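
Each DEF_RVV_FUNCTION entry above registers one builtin family whose
user-visible names carry the th_ prefix plus the usual type suffixes.
A minimal example, reduced from the tests added later in this patch
(th.vlb.v sign-extends each byte to the element width, th.vsb.v stores
the low byte of each element):

  #include "riscv_vector.h"

  void f (void *in, void *out)
  {
    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);   /* th.vlb.v */
    __riscv_th_vsb_v_i32m1 (out, v, 4);              /* th.vsb.v */
  }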
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
index 9d84ed39937..91a9b5b391e 100644
--- a/gcc/config/riscv/thead-vector-builtins.cc
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -91,6 +91,68 @@ public:
   }
 };
 
+
+/* Implements
+ * th.vl(b/h/w)[u].v/th.vs(b/h/w)[u].v/th.vls(b/h/w)[u].v/th.vss(b/h/w)[u].v/
+ * th.vlx(b/h/w)[u].v/th.vs[u]x(b/h/w).v
+ * codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    if (LST_TYPE == LST_INDEXED)
+      {
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+					       e.vector_mode ()));
+	else
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+      }
+  }
+};
+
+
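As a usage sketch for th_loadstore_width: the three LST_TYPE cases
dispatch to the unit-stride, strided and indexed patterns added to
thead-vector.md further below.  For the strided pair, assuming the
intrinsic names implied by the th_vlsw/th_vssw .def entries (they do
not appear in the tests) and the riscv_vector.h types:

  /* Load/store one 32-bit element every `stride` bytes via
     th.vlsw.v/th.vssw.v.  */
  void copy_strided (const int32_t *in, int32_t *out,
                     size_t stride, size_t vl)
  {
    vint32m1_t v = __riscv_th_vlsw_v_i32m1 (in, stride, vl);
    __riscv_th_vssw_v_i32m1 (out, stride, v, vl);
  }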
 /* Implements
  * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
  * codegen.  */
@@ -618,6 +680,23 @@ public:
   }
 };
 
+/* Implements vext.x.v.  */
+class th_extract : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    return e.use_exact_insn (code_for_pred_th_extract (e.vector_mode ()));
+  }
+};
+
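Unlike the load/store classes, th_extract models vext.x.v, which moves
a single element into a scalar register and takes no vl, mask or
policy operands (hence every hook above returns false).  A usage
sketch, assuming the intrinsic name the th_vext_x_v entry produces:

  /* rd = v[idx] via vext.x.v; no vl or mask is involved.  */
  int32_t get_elem (vint32m1_t v, size_t idx)
  {
    return __riscv_th_vext_x_v_i32m1 (v, idx);
  }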
 static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
 static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
 static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
@@ -677,6 +756,37 @@ static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
 static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
 static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
 static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+static CONSTEXPR const th_extract th_vext_x_v_obj;
 
 /* Declare the function base NAME, pointing it to an instance
    of class <NAME>_obj.  */
@@ -742,5 +852,36 @@ BASE (th_vloxseg)
 BASE (th_vsuxseg)
 BASE (th_vsoxseg)
 BASE (th_vlsegff)
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+BASE (th_vext_x_v)
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
index d0bf00b8e81..40f6ed3e3e8 100644
--- a/gcc/config/riscv/thead-vector-builtins.h
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -85,6 +85,37 @@ extern const function_base *const th_vloxseg;
 extern const function_base *const th_vsuxseg;
 extern const function_base *const th_vsoxseg;
 extern const function_base *const th_vlsegff;
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
+extern const function_base *const th_vext_x_v;
 }
 
 } // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
index 072fb5e68e1..b6f5d64fc26 100644
--- a/gcc/config/riscv/thead-vector.md
+++ b/gcc/config/riscv/thead-vector.md
@@ -1,7 +1,74 @@
 (define_c_enum "unspec" [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW
+  UNSPEC_TH_VLWU
+
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW
+  UNSPEC_TH_VLSWU
+
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VLXWU
+
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+
   UNSPEC_TH_VWLDST
 ])
 
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+  UNSPEC_TH_VLB UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+  UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+  UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+  (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+  (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+  (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+  (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+  (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+  (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+  (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+  (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+  (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+  (UNSPEC_TH_VSUXB "b")
+  (UNSPEC_TH_VSUXH "h")
+  (UNSPEC_TH_VSUXW "w")
+])
+
+(define_int_attr vlmem_order_attr [
+  (UNSPEC_TH_VLXB "")
+  (UNSPEC_TH_VLXH "")
+  (UNSPEC_TH_VLXW "")
+  (UNSPEC_TH_VSUXB "u")
+  (UNSPEC_TH_VSUXH "u")
+  (UNSPEC_TH_VSUXW "u")
+])
+
 (define_int_attr th_order [
   (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
 ])
@@ -21,6 +88,27 @@ (define_code_iterator not_unop [not])
 (define_code_iterator any_float_unop_neg [neg])
 (define_code_iterator any_float_unop_abs [abs])
 
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
 (define_mode_iterator V_VLS_VT [V VLS VT])
 (define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
 
@@ -1853,6 +1941,171 @@ (define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "SI")])
 
+(define_expand "@pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand")
+	 (match_operand 4 "vector_length_operand")
+	 (match_operand 5 "const_int_operand")
+	 (match_operand 6 "const_int_operand")
+	 (match_operand 7 "const_int_operand")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"	    "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand"	   "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+	 (match_operand 4 "vector_length_operand"	      "   rK,    rK,    rK,    rK,    rK,    rK")
+	 (match_operand 5 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 6 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 7 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"	      "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"	    "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+	|| register_operand (operands[3], <MODE>mode)))"
+  "@
+   vl<vlmem_op_attr>.v\t%0,%3%p1
+   vl<vlmem_op_attr>.v\t%0,%3
+   vl<vlmem_op_attr>.v\t%0,%3,%1.t
+   vs<vlmem_op_attr>.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+	  (match_operand:VI 2 "register_operand"	 "    vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_op_attr>.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+(define_insn "@pred_strided_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	      "=vr,    vr,    vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 7 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 8 "const_int_operand"	"    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP)
+	  (unspec:VI
+	    [(match_operand:VI 3 "memory_operand"	 "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_TH_VLSMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vls<vlmem_op_attr>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP)
+	  (unspec:VI
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:VI 3 "register_operand"       "   vr")] UNSPEC_TH_VSSMEM_OP)
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vss<vlmem_op_attr>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_indexed_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	     "=vd, vr,vd, vr")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+	     (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+	     (match_operand 6 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 7 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 8 "const_int_operand"	 "  i,  i, i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP)
+	  (unspec:VI
+	    [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlx<vlmem_op_attr>.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vldux")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_indexed_<vlmem_order_attr>store_width<vlmem_op_attr><mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:VI 2 "register_operand" "  vr")
+	   (match_operand:VI 3 "register_operand"  "  vr")] UNSPEC_TH_VSXMEM_OP))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_order_attr>x<vlmem_op_attr>.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand")
+	(unspec:<VEL>
+	  [(vec_select:<VEL>
+	     (match_operand:V_VLSI 1 "register_operand")
+	     (parallel [(match_operand:DI 2 "register_operand" "r")]))
+	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+{})
+
+(define_insn "*pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand"   "=r")
+  (unspec:<VEL>
+    [(vec_select:<VEL>
+       (match_operand:V_VLSI 1 "register_operand" "vr")
+       (parallel [(match_operand:DI 2 "register_operand" "r")]))
+     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+  "vext.x.v\t%0,%1,%2"
+  [(set_attr "type" "vimovvx")
+   (set_attr "mode" "<MODE>")])
+
 (define_insn "*pred_th_cmp<mode>_merge_tie_mask"
   [(set (match_operand:<VM> 0 "register_operand"              "=vm")
 	(if_then_else:<VM>
@@ -2571,4 +2824,4 @@ (define_insn "*pred_th_eqne<mode>_scalar_narrow"
   "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
   "vmf%B3.vf\t%0,%4,%5%p1"
   [(set_attr "type" "vfcmp")
-   (set_attr "mode" "<MODE>")])
\ No newline at end of file
+   (set_attr "mode" "<MODE>")])
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
new file mode 100644
index 00000000000..4e192bbf025
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out)
+{
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
new file mode 100644
index 00000000000..1538afec68e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
new file mode 100644
index 00000000000..bf4924a1d76
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
new file mode 100644
index 00000000000..8c451845175
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
new file mode 100644
index 00000000000..0f5b09684a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
new file mode 100644
index 00000000000..aaa75be023d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
@ 2023-12-20 14:00     ` 钟居哲
  2023-12-20 14:24       ` Re:[PATCH " joshua
  2023-12-25  6:29     ` [PATCH v4 " Jun Sha (Joshua)
  1 sibling, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 14:00 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, cooper.joshua, jinma, Cooper Qu

+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 3 "vector_length_operand"    "   rK")
+       (match_operand 4 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 2 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])

These patterns are redundant; only the names are different.
They should be removed.


juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix
to all XTheadVector instructions is not included here.
 
For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md so that we do not
generate instructions that XTheadVector does not support, such as
vmv1r and vsext.vf2.
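
To make that concrete: under plain V, a widening loop like the sketch
below can be autovectorized with vsext.vf2, which XTheadVector lacks,
so the corresponding vector.md patterns must stay disabled for it.  A
minimal C sketch:

#include <stdint.h>

/* May be vectorized with vsext.vf2 under RVV autovectorization;
   that pattern must not be used when compiling for xtheadvector.  */
void widen (int32_t *dst, const int16_t *src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i];
}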
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New
function.
(build_one): Check the return and argument types with check_type.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewsie.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/t-riscv: Add new files.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector-builtins-functions.def: New file.
* config/riscv/thead-vector-builtins.cc: New file.
* config/riscv/thead-vector-builtins.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-shapes.cc     |   23 +
gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   20 +-
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  627 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
gcc/config/riscv/thead-vector-builtins.h      |   92 +
gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   36 +-
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
gcc/testsuite/lib/target-supports.exp         |   12 +
21 files changed, 4386 insertions(+), 191 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
- extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+                                        RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
#include "riscv-vector-builtins.h"
#include "riscv-vector-builtins-shapes.h"
#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
using namespace riscv_vector;
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
#include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \
+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
};
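For readers unfamiliar with the .def idiom used here, a self-contained
toy (generic C, not GCC code) showing the technique: the same macro list
is expanded under a caller-supplied definition to build one initializer
table.

#include <stdio.h>

#define FN_LIST   \
  DEF_FN (vadd)   \
  DEF_FN (vsub)

struct entry { const char *name; };

/* Expand the list once to build the table; GCC's version re-includes
   the .def file instead of using an inline list.  */
#define DEF_FN(NAME) { #NAME },
static const struct entry table[] = { FN_LIST };
#undef DEF_FN

int
main (void)
{
  for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
    puts (table[i].name);
  return 0;
}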
/* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */
   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */
   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */
+  XTHEADVECTOR_EXT,   /* XTheadVector extension */
};
/* Enumerates the RVV operand types.  */
@@ -233,7 +234,7 @@ struct function_group_info
     switch (ext_value)
     {
       case VECTOR_EXT:
-        return TARGET_VECTOR;
+        return TARGET_VECTOR && !TARGET_XTHEADVECTOR;
       case ZVBB_EXT:
         return TARGET_ZVBB;
       case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
         return TARGET_ZVKSED;
       case ZVKSH_EXT:
         return TARGET_ZVKSH;
+      case XTHEADVECTOR_EXT:
+        return TARGET_XTHEADVECTOR;
       default:
         gcc_unreachable ();
     }
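A toy model (plain C, not GCC code) of how this dispatch is meant to
gate registration; the flag variables stand in for the real target
macros:

#include <stdbool.h>
#include <stdio.h>

enum required_ext { VECTOR_EXT, XTHEADVECTOR_EXT };

static bool target_vector = true;
static bool target_xtheadvector = true;

static bool
required_extensions_p (enum required_ext e)
{
  switch (e)
    {
    case VECTOR_EXT:
      /* Standard-RVV groups are hidden when XTheadVector is on.  */
      return target_vector && !target_xtheadvector;
    case XTHEADVECTOR_EXT:
      return target_xtheadvector;
    }
  return false;
}

int
main (void)
{
  printf ("vadd available: %d\n", required_extensions_p (VECTOR_EXT));
  printf ("th_vle available: %d\n",
          required_extensions_p (XTHEADVECTOR_EXT));
  return 0;
}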
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.
+   It does not define the XTheadVector types and intrinsic functions
+   directly in C and C++ code, but instead uses the following pragma to
+   tell GCC to insert the necessary type and function definitions itself.
+   The net effect is the same: the file is a complete implementation.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
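A minimal consumer of the new header might look as follows (hedged: it
assumes the pragma keeps the standard vsetvl intrinsic spelling):

/* Requires a toolchain where -march includes xtheadvector.  */
#include <riscv_th_vector.h>

size_t
chunk (size_t n)
{
  return __riscv_vsetvl_e32m1 (n);   /* elements handled per pass */
}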
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use.  */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions.  */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores.  */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions.  */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16. Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions.  */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions.  */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
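+
+/* As an illustration: a vsetvl intrinsic for e32/m2 reaches this expander
+   with SEW == 32 and LMUL == m2, producing an instruction along the lines
+   of
+     vsetvli a0,a1,e32,m2
+   The tail/mask policy operands always carry the "prefer any" defaults;
+   XTheadVector's vtype has no ta/ma fields, so they exist only to satisfy
+   the operand list of the th_vsetvl pattern.  */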
+
+/* Implements the codegen for
+   vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+        int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+        if (STORE_P)
+          return e.use_exact_insn (
+            code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+                                            e.index_mode ()));
+        else
+          {
+            unsigned src_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+            unsigned dst_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+            if (dst_eew_bitsize == src_eew_bitsize)
+              return e.use_exact_insn (
+                code_for_pred_th_indexed_load_same_eew (unspec,
+                                                        e.vector_mode ()));
+            else if (dst_eew_bitsize > src_eew_bitsize)
+              {
+                unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_greater_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+            else
+              {
+                unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+          }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_strided_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
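+
+/* For example, an indexed load whose index vector has EEW=16 and whose
+   data vector has EEW=32 takes the dst_eew_bitsize > src_eew_bitsize
+   branch with factor == 2 and is routed to the x2_greater_eew pattern;
+   matching EEWs use the same_eew pattern.  All of these print the same
+   vlxe.v mnemonic (see thead-vector.md); the distinct patterns give the
+   RTL the correct source and destination modes.  */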
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
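+
+/* Standard RVV spells this instruction vcpop.m; XTheadVector keeps the
+   older vmpopc.m mnemonic, which is what the @pred_th_popcount pattern in
+   thead-vector.md prints.  */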
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
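+
+/* Prints as vmfirst.m (the XTheadVector spelling of vfirst.m).  The result
+   is the index of the first set mask bit, or -1 when no bit is set, which
+   the @pred_th_ffs pattern models as (ffs x) - 1.  */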
+
+/* Implements vmadc.  */
+class th_vmadc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
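+
+/* The vvm/vxm forms consume an explicit carry-in mask operand, while the
+   vv/vx forms compute only the carry-out, which is why they map to the
+   separate madc and madc_overflow patterns above.  */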
+
+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
+
+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
+
+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
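+
+/* A fault-only-first load may truncate vl at the first faulting element,
+   so the builtin is marked as writing a CSR (CP_WRITE_CSR) and is folded
+   through fold_fault_load just like the standard vleff intrinsic.  */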
+
+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (UNSPEC, e.vector_mode (),
+                                     e.index_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (UNSPEC, e.vector_mode (),
+                                      e.index_mode ()));
+  }
+};
+
+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+  (UNSPEC_REDUC_SUM "redsum")
+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX,
+                                      GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
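+
+;; XTheadVector has no whole-register load/store or whole-register move
+;; instructions, so whole-register moves are split into the
+;; @pred_th_whole_mov patterns below, which fall back to vle.v/vse.v and
+;; vmv.v.v with a VLMAX length.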
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:V_VLS_VT
+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:VB
+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_expand "@pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand")
+         (match_operand 4 "vector_length_operand")
+         (match_operand 5 "const_int_operand")
+         (match_operand 6 "const_int_operand")
+         (match_operand 7 "const_int_operand")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")
+ (if_then_else:V_VLSI
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (vec_duplicate:V_VLSI
+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))
+   (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.x\t%0,%3
+   vmv.v.x\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vmv.s.x\t%0,%3
+   vmv.s.x\t%0,%3"
+  "(register_operand (operands[3], <VEL>mode)
+  || CONST_POLY_INT_P (operands[3]))
+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+  [(set (match_dup 0)
+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+      (match_dup 5) (match_dup 6) (match_dup 7)
+      (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (vec_duplicate:V_VLSI (match_dup 3))
+   (match_dup 2)))]
+  {
+    gcc_assert (can_create_pseudo_p ());
+    if (CONST_POLY_INT_P (operands[3]))
+      {
+ rtx tmp = gen_reg_rtx (<VEL>mode);
+ emit_move_insn (tmp, operands[3]);
+ operands[3] = tmp;
+      }
+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+ GET_MODE_ALIGNMENT (<VEL>mode));
+    m = validize_mem (m);
+    emit_move_insn (m, operands[3]);
+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+    operands[3] = m;
+
+    /* For SEW = 64 in RV32 system, we expand vmv.s.x:
+       andi a2,a2,1
+       vsetvl zero,a2,e64
+       vlse64.v  */
+    if (satisfies_constraint_Wb1 (operands[1]))
+      {
+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+ operands[1] = CONSTM1_RTX (<VM>mode);
+      }
+  }
+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")
+        (if_then_else:V_VLSF_ZVFHMIN
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+             (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+             (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSF_ZVFHMIN
+            (match_operand:<VEL> 3 "direct_broadcast_operand"      "  f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))
+          (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"   " vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vfmv.v.f\t%0,%3
+   vfmv.v.f\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vfmv.s.f\t%0,%3
+   vfmv.s.f\t%0,%3"
+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+   (set_attr "mode" "<MODE>")])
+
+;; vle.v/vse.v,vmv.v.v
+(define_insn_and_split "*pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")
+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+        || register_operand (operands[3], <MODE>mode)))"
+  "@
+   vle.v\t%0,%3%p1
+   vle.v\t%0,%3
+   vle.v\t%0,%3,%1.t
+   vse.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn_and_split "@pred_th_mov<mode>"
+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")
+ (if_then_else:VB_VLS
+   (unspec:VB_VLS
+     [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+      (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")
+      (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")
+   (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   #
+   #
+   vmcpy.m\t%0,%3
+   vmclr.m\t%0
+   vmset.m\t%0"
+  "&& !reload_completed"
+  [(const_int 0)]
+  {
+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+        || (REG_P (operands[0]) && REG_P (operands[3])
+            && INTVAL (operands[5]) == riscv_vector::VLMAX))
+      {
+        emit_move_insn (operands[0], operands[3]);
+        DONE;
+      }
+
+    FAIL;
+  }
+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+      (match_operand 3 "vector_length_operand"    "   rK")
+      (match_operand 4 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operand:V 2 "register_operand"         "    vr")
+   (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vse.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")
+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+      (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V
+     [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")
+      (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+   (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+  vlse.v\t%0,%3,%z4%p1
+  vlse.v\t%0,%3,%z4
+  vlse.v\t%0,%3,%z4,%1.t
+  vle.v\t%0,%3%p1
+  vle.v\t%0,%3
+  vle.v\t%0,%3,%1.t"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK,       rK")
+      (match_operand 5 "const_int_operand"        "    i,        i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V
+     [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")
+      (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)
+   (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "@
+  vsse.v\t%3,%0,%z2%p1
+  vse.v\t%3,%0%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+      (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+      (match_operand 6 "const_int_operand"         "  i,  i, i,  i")
+      (match_operand 7 "const_int_operand"         "  i,  i, i,  i")
+      (match_operand 8 "const_int_operand"         "  i,  i, i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V
+     [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+      (mem:BLK (scratch))
+      (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+   (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST eew is greater than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+  [(set (match_operand:VEEWEXT2 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT2
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT2
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+  [(set (match_operand:VEEWEXT4 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT4
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT4
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+  [(set (match_operand:VEEWEXT8 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT8
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT8
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST eew is smaller than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC2
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC2
+            [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC4
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC4
+            [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC8
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC8
+            [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO64I 2 "register_operand"   "   vr")
+           (match_operand:RATIO64 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO32I 2 "register_operand"   "   vr")
+           (match_operand:RATIO32 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO16I 2 "register_operand"   "   vr")
+           (match_operand:RATIO16 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO8I 2 "register_operand"    "   vr")
+           (match_operand:RATIO8 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO4I 2 "register_operand"    "   vr")
+           (match_operand:RATIO4 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO2I 2 "register_operand"    "   vr")
+           (match_operand:RATIO2 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")
+    (match_operand:RATIO1 2 "register_operand"   "  vr")
+    (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO1:MODE>")])
+
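+;; Mask population count.  XTheadVector keeps the pre-1.0 mnemonic vmpopc.m
+;; (renamed vcpop.m in RVV 1.0).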
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"               "=r")
+ (popcount:P
+   (unspec:VB
+     [(and:VB
+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+        (match_operand:VB 2 "register_operand"    "   vr"))
+      (match_operand 3 "vector_length_operand"    "   rK")
+      (match_operand 4 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+  "TARGET_XTHEADVECTOR"
+  "vmpopc.m\t%0,%2%p1"
+  [(set_attr "type" "vmpop")
+   (set_attr "mode" "<VB:MODE>")])
+
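+;; Find-first-set mask bit.  XTheadVector uses vmfirst.m (vfirst.m in RVV 1.0);
+;; the (plus ... -1) maps GCC's 1-based ffs onto the instruction's 0-based
+;; result (-1 when no bit is set).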
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+  [(set (match_operand:P 0 "register_operand"                 "=r")
+ (plus:P
+   (ffs:P
+     (unspec:VB
+       [(and:VB
+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+          (match_operand:VB 2 "register_operand"    "   vr"))
+        (match_operand 3 "vector_length_operand"    "   rK")
+        (match_operand 4 "const_int_operand"        "    i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+   (const_int -1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmfirst.m\t%0,%2%p1"
+  [(set_attr "type" "vmffs")
+   (set_attr "mode" "<VB:MODE>")])
+
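+;; Narrowing float-to-integer conversions.  XTheadVector keeps the .v suffix
+;; (vfncvt.x[u].f.v) where RVV 1.0 uses .w.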
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<VNCONVERT>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<VNCONVERT>
+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)
+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftoi")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
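+;; Narrowing integer-to-float conversions (vfncvt.f.x[u].v).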
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<VNCONVERT>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float:<VNCONVERT>
+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.x<u>.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtitof")
+   (set_attr "mode" "<VNCONVERT>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
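+;; Narrowing right shifts.  XTheadVector spells them vnsrl/vnsra with .v[vxi]
+;; suffixes instead of RVV 1.0's .w[vxi].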
+(define_insn "@pred_th_narrow_<optab><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (any_shiftrt:VWEXTI
+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (any_shiftrt:VWEXTI
+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vn<insn>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
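+;; Integer truncation, implemented as a zero-amount narrowing shift
+;; (vnsrl.vx with x0).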
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (truncate:<V_DOUBLE_TRUNC>
+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnsrl.vx\t%0,%3,x0%p1"
+  [(set_attr "type" "vnshift")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
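+;; Floating-point narrowing conversion (vfncvt.f.f.v).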
+(define_insn "@pred_th_trunc<mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (float_truncate:<V_DOUBLE_TRUNC>
+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vfncvt.f.f.v\t%0,%3%p1"
+  [(set_attr "type" "vfncvtftof")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
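+;; Fault-only-first load (vleff.v).  The second set models the new VL
+;; written by the instruction.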
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")
+ (if_then_else:V
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V
+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)
+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))
+   (set (reg:SI VL_REGNUM)
+   (unspec:SI
+     [(if_then_else:V
+        (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+        (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vleff.v\t%0,%3%p1"
+  [(set_attr "type" "vldff")
+   (set_attr "mode" "<MODE>")])
+
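+;; Unit-stride segment load/store for tuple modes
+;; (vlseg<nf>e.v / vsseg<nf>e.v).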
+(define_insn "@pred_th_unit_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>e.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegde")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 3 "vector_length_operand"    "   rK")
+       (match_operand 4 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 2 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])
+
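+;; Strided segment load/store (vlsseg<nf>e.v / vssseg<nf>e.v).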
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (match_operand 8 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_STRIDED)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+  [(set_attr "type" "vlsegds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 4 "vector_length_operand"    "   rK")
+       (match_operand 5 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 3 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+  [(set_attr "type" "vssegts")
+   (set_attr "mode" "<MODE>")])
+
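+;; Fault-only-first segment load (vlseg<nf>eff.v); like vleff.v it updates VL.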
+(define_insn "@pred_th_fault_load<mode>"
+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")
+ (if_then_else:VT
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")
+      (match_operand 5 "const_int_operand"        "    i,     i,     i")
+      (match_operand 6 "const_int_operand"        "    i,     i,     i")
+      (match_operand 7 "const_int_operand"        "    i,     i,     i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:VT
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")
+      (mem:BLK (scratch))] UNSPEC_VLEFF)
+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))
+   (set (reg:SI VL_REGNUM)
+        (unspec:SI
+          [(if_then_else:VT
+      (unspec:<VM>
+        [(match_dup 1) (match_dup 4) (match_dup 5)
+         (match_dup 6) (match_dup 7)
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (unspec:VT
+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+      (match_dup 2))] UNSPEC_MODIFY_VL))]
+  "TARGET_XTHEADVECTOR"
+  "vlseg<nf>eff.v\t%0,(%z3)%p1"
+  [(set_attr "type" "vlsegdff")
+   (set_attr "mode" "<MODE>")])
+
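+;; Indexed segment loads (vlxseg<nf>e.v), one pattern per
+;; tuple-mode/index-mode pairing.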
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V1T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V1T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V2T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V2T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V4T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V4T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")
+ (if_then_else:V8T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V8T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)
+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")
+ (if_then_else:V16T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V16T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)
+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")
+ (if_then_else:V32T
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"    "   rK,   rK")
+      (match_operand 6 "const_int_operand"        "    i,    i")
+      (match_operand 7 "const_int_operand"        "    i,    i")
+      (match_operand 8 "const_int_operand"        "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:V32T
+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")
+      (mem:BLK (scratch))
+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)
+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vlsegd<order>x")
+   (set_attr "mode" "<V32T:MODE>")])
+
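+;; Indexed segment stores (vs<th_order>xseg<nf>e.v).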
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO64I 2 "register_operand"       "   vr")
+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO32I 2 "register_operand"       "   vr")
+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO16I 2 "register_operand"       "   vr")
+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO8I 2 "register_operand"       "   vr")
+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO4I 2 "register_operand"      "   vr")
+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+      (match_operand 4 "vector_length_operand"    "   rK")
+      (match_operand 5 "const_int_operand"        "    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")
+    (match_operand:RATIO2I 2 "register_operand"      "   vr")
+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";
+  [(set_attr "type" "vssegtux")
+   (set_attr "mode" "<V32T:MODE>")])
+
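+;; Floating-point negation via sign injection: vfsgnjn.vv with both source
+;; operands the same register.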
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop_neg:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vfsgnjn.vv\t%0,%3,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
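+;; Floating-point absolute value via vfsgnjx.vv on the same register.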
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop_abs:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vfsgnjx.vv\t%0,%3,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
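+;; Bitwise NOT (vnot.v).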
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")
+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")
+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")
+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (not_unop:V_VLSI
+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vnot.v\t%0,%3%p1"
+  [(set_attr "type" "vialu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
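+;; Integer negation as 0 - x (vrsub.vx with x0).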
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")
+      (match_operand 5 "const_int_operand" " i, i,  i,  i")
+      (match_operand 6 "const_int_operand" " i, i,  i,  i")
+      (match_operand 7 "const_int_operand" " i, i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (neg_unop:V_VLSI
+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))
+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vrsub.vx\t%0,%3,x0%p1"
+  [(set_attr "type" "vialu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
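+;; Rounding-mode-dependent floating-point unary operations (vf<insn>.v).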
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
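+;; Narrowing fixed-point clips (vnclip[u]), again with .v rather than
+;; RVV 1.0's .w suffixes.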
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<V_DOUBLE_TRUNC>
+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<V_DOUBLE_TRUNC>
+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")
+ (unspec:<V_LMUL1>
+   [(unspec:<VM>
+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")
+      (match_operand               5 "vector_length_operand" "   rK,   rK")
+      (match_operand               6 "const_int_operand"     "    i,    i")
+      (match_operand               7 "const_int_operand"     "    i,    i")
+      (match_operand               8 "const_int_operand"     "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_LMUL1> [
+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")
+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")
+           ] ANY_FREDUC_SUM)
+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")
+ (unspec:<V_EXT_LMUL1>
+   [(unspec:<VM>
+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")
+      (match_operand                5 "vector_length_operand" "   rK,   rK")
+      (match_operand                6 "const_int_operand"     "    i,    i")
+      (match_operand                7 "const_int_operand"     "    i,    i")
+      (match_operand                8 "const_int_operand"     "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_EXT_LMUL1> [
+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")
+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")
+           ] ANY_FWREDUC_SUM)
+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfwred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
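+;; Carry/borrow-out mask patterns (vmadc/vmsbc).  The plain variants take a
+;; carry-in mask (operand 3); the _overflow variants below do not.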
+(define_insn "@pred_th_madc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+ (unspec:<VM>
+    [(plus:VI
+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")
+        (match_operand 5 "const_int_operand"     "   i,   i,   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2m\t%0,%1,%v2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")
+ (unspec:<VM>
+    [(minus:VI
+      (match_operand:VI 1 "register_operand"     "  vr")
+      (match_operand:VI 2 "register_operand"     " vr"))
+     (match_operand:<VM> 3 "register_operand"    " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand" " rK")
+        (match_operand 5 "const_int_operand"     "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vvm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "register_operand" "  r"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (match_operand:<VM> 3 "register_operand")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand")
+        (match_operand 5 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (match_operand:<VM> 3 "register_operand"          " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"       " rK")
+        (match_operand 5 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (match_operand:<VM> 3 "register_operand")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand")
+        (match_operand 5 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (match_operand:<VM> 3 "register_operand"          " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"       " rK")
+        (match_operand 5 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+ (unspec:<VM>
+    [(plus:VI
+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")
+        (match_operand 4 "const_int_operand"     "   i,   i,   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2\t%0,%1,%v2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI
+      (match_operand:VI 1 "register_operand"     "   vr")
+      (match_operand:VI 2 "register_operand"     "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK")
+        (match_operand 4 "const_int_operand"     "   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vv\t%0,%1,%2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"       " rK")
+        (match_operand 4 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"      " rK")
+        (match_operand 4 "const_int_operand"          "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
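+;; vsetvl for XTheadVector.  The encoding has no tail/mask-policy fields, so
+;; the template emits only e<sew>,m<lmul>; ta/ma are tracked via insn
+;; attributes instead.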
+(define_insn "*th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)
+     (match_dup 4)
+     (match_dup 5)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_operand 3 "const_int_operand" "i")
+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; Emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern lets us benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+       (match_dup 5)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 4 "register_operand"      "  r"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 4 "register_operand"       "  r"))
+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 4 "register_operand"       "  r"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 4 "register_operand"       "  r"))
+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"            "  0")
+      (match_operand 5 "vector_length_operand"            " rK")
+      (match_operand 6 "const_int_operand"                "  i")
+      (match_operand 7 "const_int_operand"                "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))
+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))
+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))
+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")
+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vv\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "signed_order_operator"
+      [(match_operand:V_VLSF 3 "register_operand"           " vr")
+       (match_operand:V_VLSF 4 "register_operand"           " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vv\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vv\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"         "  0")
+      (match_operand 5 "vector_length_operand"         " rK")
+      (match_operand 6 "const_int_operand"             "  i")
+      (match_operand 7 "const_int_operand"             "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "signed_order_operator"
+      [(match_operand:V_VLSF 3 "register_operand"      " vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 4 "register_operand"     "  f"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"         "  0")
+      (match_operand 5 "vector_length_operand"         " rK")
+      (match_operand 6 "const_int_operand"             "  i")
+      (match_operand 7 "const_int_operand"             "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 4 "register_operand"     "  f"))
+       (match_operand:V_VLSF 3 "register_operand"      " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:00     ` 钟居哲
@ 2023-12-20 14:24       ` joshua
  2023-12-20 14:27         ` 钟居哲
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-20 14:24 UTC (permalink / raw)
  To: 钟居哲, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu

Hi Juzhe,
The patterns you supposed redundant are in fact all necessary, because they generate different instructions from the vector ones.
Take pred_th_unit_strided_store as an example: XTheadVector does not have <sew> in its load/store instructions,
so we cannot reuse the same pattern as vector. That is why we define a new function_base in thead-vector-builtins-functions.def.
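
To make the mnemonic difference concrete, here are the two output
templates side by side (a sketch; the RVV line is abridged from the
corresponding unit-strided segment-store pattern in vector.md):

  ;; RVV 1.0 (vector.md): SEW is encoded in the mnemonic.
  "vsseg<nf>e<sew>.v\t%2,(%z1)%p0"   ;; e.g. vsseg2e32.v v4,(a0)
  ;; XTheadVector (thead-vector.md): no SEW suffix; the element width
  ;; comes from vtype, so the RVV template cannot be reused.
  "vsseg<nf>e.v\t%2,(%z1)%p0"        ;; e.g. vsseg2e.v v4,(a0)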
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Wednesday, December 20, 2023, 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
Why do you add these?
+(define_insn "@pred_th_unit_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 2 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vsseg<nf>e.v\t%2,(%z1)%p0"
+ [(set_attr "type" "vssegte")
+ (set_attr "mode" "<MODE>")])
These patterns are redundant; just the names are different.
They should be removed.
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
Date: 2023-12-20 20:34
To: gcc-patches <gcc-patches@gcc.gnu.org>
CC: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; christoph.muellner <christoph.muellner@vrull.eu>; juzhe.zhong <juzhe.zhong@rivai.ai>; Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>; Jin Ma <jinma@linux.alibaba.com>; Xianmiao Qu <cooper.qu@linux.alibaba.com>
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix
to all XTheadVector instructions is not included here.
For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md, so that
we do not generate instructions that XTheadVector does not
support, like vmv1r and vsext.vf2.
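
As a minimal sketch of that approach, the gate goes into the insn
condition so the pattern simply never matches when XTheadVector is
enabled (this mirrors the whole-register-move pattern in vector.md):

(define_insn "*mov<mode>"
  [(set (match_operand:VB 0 "register_operand" "=vr")
        (match_operand:VB 1 "register_operand" " vr"))]
  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"  ;; vmv1r.v is unavailable
  "vmv1r.v\t%0,%1"                         ;; under XTheadVector
  [(set_attr "type" "vmov")
   (set_attr "mode" "<MODE>")])

With the pattern gated out, legitimize_move instead emits the
pred_th_whole_mov sequence, as the riscv-v.cc hunk below shows.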
gcc/ChangeLog:
 * config.gcc: Add files for XTheadVector intrinsics.
 * config/riscv/autovec.md: Guard XTheadVector.
 * config/riscv/riscv-string.cc (expand_block_move):
 Guard XTheadVector.
 * config/riscv/riscv-v.cc (legitimize_move):
 New expansion.
 (get_prefer_tail_policy): Give specific value for tail.
 (get_prefer_mask_policy): Give specific value for mask.
 (vls_mode_valid_p): Avoid autovec.
 * config/riscv/riscv-vector-builtins-shapes.cc (check_type):
 New function.
 (build_one): Call check_type for XTheadVector.
 * config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
 (DEF_THEAD_RVV_FUNCTION): Add new macros.
 (check_required_extensions):
 (handle_pragma_vector):
 * config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
 (RVV_REQUIRE_XTHEADVECTOR):
 Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
 (struct function_group_info):
 * config/riscv/riscv-vector-switch.def (ENTRY):
 Disable fractional mode for the XTheadVector extension.
 (TUPLE_ENTRY): Likewise.
 * config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
 * config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
 Guard XTheadVector.
 (riscv_v_adjust_bytesize): Likewise.
 (riscv_preferred_simd_mode): Likewise.
 (riscv_autovectorize_vector_modes): Likewise.
 (riscv_vector_mode_supported_any_target_p): Likewise.
 (TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
 * config/riscv/t-riscv: Add new files.
 * config/riscv/vector-iterators.md: Remove fractional LMUL.
 * config/riscv/vector.md: Include thead-vector.md.
 * config/riscv/riscv_th_vector.h: New file.
 * config/riscv/thead-vector-builtins-functions.def: New file.
 * config/riscv/thead-vector-builtins.cc: New file.
 * config/riscv/thead-vector-builtins.h: New file.
 * config/riscv/thead-vector.md: New file.
gcc/testsuite/ChangeLog:
 * gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
 * gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
 * lib/target-supports.exp: Add target for XTheadVector.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc | 4 +-
 gcc/config/riscv/autovec.md | 2 +-
 gcc/config/riscv/predicates.md | 8 +-
 gcc/config/riscv/riscv-string.cc | 3 +
 gcc/config/riscv/riscv-v.cc | 13 +-
 .../riscv/riscv-vector-builtins-shapes.cc | 23 +
 gcc/config/riscv/riscv-vector-builtins.cc | 7 +
 gcc/config/riscv/riscv-vector-builtins.h | 5 +-
 gcc/config/riscv/riscv-vector-switch.def | 150 +-
 gcc/config/riscv/riscv.cc | 20 +-
 gcc/config/riscv/riscv_th_vector.h | 49 +
 gcc/config/riscv/t-riscv | 16 +
 .../riscv/thead-vector-builtins-functions.def | 627 ++++
 gcc/config/riscv/thead-vector-builtins.cc | 746 +++++
 gcc/config/riscv/thead-vector-builtins.h | 92 +
 gcc/config/riscv/thead-vector.md | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md | 186 +-
 gcc/config/riscv/vector.md | 36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c | 2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
 gcc/testsuite/lib/target-supports.exp | 12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
 extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
 extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
 extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
- extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
 d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
 target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 ;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
 [(match_operand 0 "register_operand")
 (match_operand 1 "memory_operand")
 (match_operand:ANYI 2 "const_int_operand")]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 {
 riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
 (match_operand 0 "register_operand")))
 (define_predicate "vector_csr_operand"
- (ior (match_operand 0 "const_csr_operand")
- (match_operand 0 "register_operand")))
+ (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+ (match_operand 0 "const_csr_operand"))
+ (match_operand 0 "register_operand")))
 ;; V has 32-bit unsigned immediates. This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
 (ior (match_operand 0 "pmode_register_operand")
- (match_operand 0 "const_csr_operand")))
+ (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+ (match_operand 0 "const_csr_operand"))))
 (define_special_predicate "autovec_length_operand"
 (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 bnez a2, loop # Any more?
 ret # Return
 */
+ if (TARGET_XTHEADVECTOR)
+ return false;
+
 gcc_assert (TARGET_VECTOR);
 HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
 return true;
 }
+ if (TARGET_XTHEADVECTOR)
+ {
+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+ RVV_VLMAX, GEN_INT(VLMAX)));
+ return true;
+ }
+
 if (riscv_v_ext_vls_mode_p (mode))
 {
 if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return TAIL_ANY;
+ return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 /* Get prefer mask policy. */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return MASK_ANY;
+ return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 /* Get avl_type rtx. */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
- if (!TARGET_VECTOR)
+ if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
 return false;
 if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+ valid for the function. */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+ tree arg;
+ unsigned i;
+
+ if (!return_type)
+ return false;
+
+ FOR_EACH_VEC_ELT (argument_types, i, arg)
+ if (!arg)
+ return false;
+
+ return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
 mode suffix at index PAIR && bi and predication suffix at index pred_idx. */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
 group.ops_infos.types[vec_type_idx].index);
 b.allocate_argument_types (function_instance, argument_types);
 b.apply_predication (function_instance, return_type, argument_types);
+
+ if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+ return;
+
 b.add_overloaded_function (function_instance, *group.shape);
 b.add_unique_function (function_instance, (*group.shape), return_type,
 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
 using namespace riscv_vector;
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
 {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
 #include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
 };
 /* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
 ZVKNHB_EXT, /* Crypto vector Zvknhb sub-ext */
 ZVKSED_EXT, /* Crypto vector Zvksed sub-ext */
 ZVKSH_EXT, /* Crypto vector Zvksh sub-ext */
+ XTHEADVECTOR_EXT, /* XTheadVector extension */
 };
 /* Enumerates the RVV operand types. */
@@ -233,7 +234,7 @@ struct function_group_info
 switch (ext_value)
 {
 case VECTOR_EXT:
- return TARGET_VECTOR;
+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);
 case ZVBB_EXT:
 return TARGET_ZVBB;
 case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
 return TARGET_ZVKSED;
 case ZVKSH_EXT:
 return TARGET_ZVKSH;
+ case XTHEADVECTOR_EXT:
+ return TARGET_XTHEADVECTOR;
 default:
 gcc_unreachable ();
 }
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 /* Disable modes if TARGET_MIN_VLEN == 32. */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32. */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32. */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 /* Disable modes if !TARGET_VECTOR_ELEN_64. */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
 if (riscv_v_ext_vector_mode_p (mode))
 {
+ if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
 poly_int64 nunits = GET_MODE_NUNITS (mode);
 poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::preferred_simd_mode (mode);
 return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::autovectorize_vector_modes (modes, all);
 return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 return false;
 }
+/* Implements target hook vector_mode_supported_any_target_p. */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+ if (TARGET_XTHEADVECTOR)
+ return false;
+ return true;
+}
+
 /* Initialize the GCC target structure. */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short. It does
+ not define the RVV types and intrinsic functions directly in C and C++
+ code, but instead uses the following pragma to tell GCC to insert the
+ necessary type and function definitions itself. The net effect is the
+ same, and the file is a complete implementation of riscv_th_vector.h. */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
 $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
 $(RISCV_BUILTINS_H)
 $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 $(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 $(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
+thead-vector-builtins.o: \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc \
+ $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+ $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+ $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+ gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+ rtx-vector-builder.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(RISCV_BUILTINS_H)
+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
 $(SYSTEM_H) $(TM_H)
 $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use. */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions. */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores. */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions. */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16. Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions. */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions. */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations. */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions. */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions. */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions. */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+ <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>. */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+ bool apply_vl_p () const override
+ {
+ return false;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
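+ /* For vsetvlmax, pass x0 as the AVL operand to request VLMAX.  */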
+ if (VLMAX_P)
+ e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+ else
+ e.add_input_operand (0);
+
+ tree type = builtin_types[e.type.index].vector;
+ machine_mode mode = TYPE_MODE (type);
+
+ machine_mode inner_mode = GET_MODE_INNER (mode);
+ /* SEW. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+ /* LMUL. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_vlmul (mode), Pmode));
+
+ /* TAIL_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+ /* MASK_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_mask_policy (), Pmode));
+ return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+ }
+};
+
+/* Implements
+   vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
+   codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return !STORE_P; }
+ bool apply_mask_policy_p () const override { return !STORE_P; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ if (STORE_P)
+ return CP_WRITE_MEMORY;
+ else
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ if (STORE_P || LST_TYPE == LST_INDEXED)
+ return true;
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (LST_TYPE == LST_INDEXED)
+ {
+ int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+ if (STORE_P)
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+ e.index_mode ()));
+ else
+ {
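+ /* Select the indexed-load pattern from the ratio of data EEW to index EEW.  */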
+ unsigned src_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+ unsigned dst_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+ if (dst_eew_bitsize == src_eew_bitsize)
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_same_eew (
+ unspec, e.vector_mode ()));
+ }
+ else if (dst_eew_bitsize > src_eew_bitsize)
+ {
+ unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_greater_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_greater_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_greater_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_smaller_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ }
+ }
+ else if (LST_TYPE == LST_STRIDED)
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+ else
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_mov (e.vector_mode ()));
+ }
+ }
+};
+
+/* Implements vneg/vnot. */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+ }
+};
+
+/* Implements vnsrl/vnsra. */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (CODE, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vncvt. */
+class th_vncvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ }
+};
+
+/* Implements vnclip/vnclipu. */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override { return true; }
+
+ bool may_require_vxrm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vcpop. */
+class th_vcpop : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_popcount (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vfirst. */
+class th_vfirst : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_ffs (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vmadc. */
+class th_vmadc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
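+ /* The vvm/vxm forms take a carry-in mask; vv/vx compute the carry-out only.  */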
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vmsbc. */
+class th_vmsbc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vfncvt.x. */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+ }
+};
+
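+/* Implements vfncvt.f.  */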
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_f_w)
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_x_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+ if (e.op_info->op == OP_TYPE_xu_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements floating-point reduction instructions. */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ bool apply_mask_policy_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+ }
+};
+
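+/* Implements vleff.v.  */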
+class th_vleff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vlseg.v. */
+class th_vlseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vsseg.v. */
+class th_vsseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_store (e.vector_mode ()));
+ }
+};
+
+/* Implements vlsseg.v. */
+class th_vlsseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vssseg.v. */
+class th_vssseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ }
+};
+
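+/* Implements vluxseg.v/vloxseg.v.  */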
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
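+/* Implements vsuxseg.v/vsoxseg.v.  */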
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
+/* Implements vlsegff.v. */
+class th_vlsegff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+ of class <NAME>_obj. */
+#define BASE(NAME) \
+ namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+ <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+} // end namespace bases
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
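+;; Unspec used by the whole vector register move/load/store patterns below.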
+(define_c_enum "unspec" [
+ UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+ (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+ (UNSPEC_REDUC_SUM "redsum")
+ (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+ (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+ (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+ (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+ (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
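+;; XTheadVector has no dedicated whole-register move, load or store
+;; instructions, so whole-register moves are split into VLMAX-predicated
+;; vmv.v.v/vle.v/vse.v (see pred_th_whole_mov below).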
+(define_split
+ [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+ "TARGET_XTHEADVECTOR"
+ [(const_int 0)]
+ {
+ emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				  RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+ DONE;
+ })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:V_VLS_VT
+ [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")])
+
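+;; Mask-mode variant of the whole-register move; SEW and LMUL are
+;; fixed to 8 and m1, matching the attributes below.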
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:VB
+ [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")
+ (set (attr "sew") (const_int 8))
+ (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
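+;; Predicated move expander; the matching insns below select vle.v,
+;; vse.v or vmv.v.v depending on where the operands live.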
+(define_expand "@pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "vector_move_operand")
+ (match_operand:V_VLS 2 "vector_merge_operand")))]
+ "TARGET_XTHEADVECTOR"
+ {})
+
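+;; Integer broadcast: vmv.v.x/vmv.s.x, or a zero-strided vlse.v when
+;; the scalar comes from memory.  A scalar wider than XLEN is spilled
+;; to a stack slot and reloaded with vlse.v (see the split).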
+(define_insn_and_split "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vr, vr, vd, vd, vr, vr, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " r, r,Wdm,Wdm,Wdm,Wdm, r, r"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.x\t%0,%3
+ vmv.v.x\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vmv.s.x\t%0,%3
+ vmv.s.x\t%0,%3"
+ "(register_operand (operands[3], <VEL>mode)
+ || CONST_POLY_INT_P (operands[3]))
+ && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+ [(set (match_dup 0)
+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+ (match_dup 5) (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI (match_dup 3))
+ (match_dup 2)))]
+ {
+ gcc_assert (can_create_pseudo_p ());
+ if (CONST_POLY_INT_P (operands[3]))
+ {
+ rtx tmp = gen_reg_rtx (<VEL>mode);
+ emit_move_insn (tmp, operands[3]);
+ operands[3] = tmp;
+ }
+ rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+ GET_MODE_ALIGNMENT (<VEL>mode));
+ m = validize_mem (m);
+ emit_move_insn (m, operands[3]);
+ m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+ operands[3] = m;
+
+  /* For SEW = 64 on RV32, we expand vmv.s.x as:
+       andi a2,a2,1
+       vsetvl zero,a2,e64
+       vlse.v  */
+ if (satisfies_constraint_Wb1 (operands[1]))
+ {
+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+ operands[1] = CONSTM1_RTX (<VM>mode);
+ }
+ }
+ [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+ (set_attr "mode" "<MODE>")])
+
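+;; Floating-point broadcast: vfmv.v.f/vfmv.s.f, or a zero-strided
+;; vlse.v when the scalar comes from memory.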
+(define_insn "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand" "=vr, vr, vr, vr, vr, vr, vr, vr")
+ (if_then_else:V_VLSF_ZVFHMIN
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSF_ZVFHMIN
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " f, f,Wdm,Wdm,Wdm,Wdm, f, f"))
+ (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vfmv.v.f\t%0,%3
+ vfmv.v.f\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vfmv.s.f\t%0,%3
+ vfmv.s.f\t%0,%3"
+ [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+ (set_attr "mode" "<MODE>")])
+
+;; vle.v/vse.v/vmv.v.v
+(define_insn_and_split "*pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand" "=vr, vr, vd, m, vr, vr")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "reg_or_mem_operand" " m, m, m, vr, vr, vr")
+ (match_operand:V_VLS 2 "vector_merge_operand" " 0, vu, vu, vu, vu, 0")))]
+ "(TARGET_XTHEADVECTOR
+ && (register_operand (operands[0], <MODE>mode)
+ || register_operand (operands[3], <MODE>mode)))"
+ "@
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t
+ vse.v\t%3,%0%p1
+ vmv.v.v\t%0,%3
+ vmv.v.v\t%0,%3"
+ "&& register_operand (operands[0], <MODE>mode)
+ && register_operand (operands[3], <MODE>mode)
+ && satisfies_constraint_vu (operands[2])
+ && INTVAL (operands[7]) == riscv_vector::VLMAX"
+ [(set (match_dup 0) (match_dup 3))]
+ ""
+ [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn_and_split "@pred_th_mov<mode>"
+ [(set (match_operand:VB_VLS 0 "nonimmediate_operand" "=vr, m, vr, vr, vr")
+ (if_then_else:VB_VLS
+ (unspec:VB_VLS
+ [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:VB_VLS 3 "vector_move_operand" " m, vr, vr, Wc0, Wc1")
+ (match_operand:VB_VLS 2 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ #
+ #
+ vmcpy.m\t%0,%3
+ vmclr.m\t%0
+ vmset.m\t%0"
+ "&& !reload_completed"
+ [(const_int 0)]
+ {
+ if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+ || (REG_P (operands[0]) && REG_P (operands[3])
+ && INTVAL (operands[5]) == riscv_vector::VLMAX))
+ {
+ emit_move_insn (operands[0], operands[3]);
+ DONE;
+ }
+
+ FAIL;
+ }
+ [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+ (set_attr "mode" "<MODE>")])
+
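+;; Predicated store; the AVL is taken from operand 3.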
+(define_insn "@pred_th_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V 2 "register_operand" " vr")
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "vse.v\t%2,%0%p1"
+ [(set_attr "type" "vste")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 4))
+ (set_attr "vl_op_idx" "3")])
+
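+;; Strided load: vlse.v with an explicit stride, or plain vle.v for
+;; the unit-stride alternatives.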
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vr, vr, vd, vr, vr, vd")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m, m, m")
+ (match_operand 4 "<V:stride_predicate>" "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+ (match_operand:V 2 "vector_merge_operand" " 0, vu, vu, 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vlse.v\t%0,%3,%z4%p1
+ vlse.v\t%0,%3,%z4
+ vlse.v\t%0,%3,%z4,%1.t
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t"
+ [(set_attr "type" "vlds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m, m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 2 "<V:stride_predicate>" "<V:stride_store_constraint>")
+ (match_operand:V 3 "register_operand" " vr, vr")] UNSPEC_STRIDED)
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vsse.v\t%3,%0,%z2%p1
+ vse.v\t%3,%0%p1"
+ [(set_attr "type" "vsts")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 5))])
+
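+;; Indexed (gather) loads; the same vlxe.v mnemonic serves both the
+;; ordered and unordered patterns.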
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+ [(set (match_operand:V 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ,rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+ (match_operand:V 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST eew is greater than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+ [(set (match_operand:VEEWEXT2 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT2 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+ [(set (match_operand:VEEWEXT4 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT4 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+ [(set (match_operand:VEEWEXT8 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT8 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST eew is smaller than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+ [(set (match_operand:VEEWTRUNC2 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC2 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+ [(set (match_operand:VEEWTRUNC4 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC4 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+ [(set (match_operand:VEEWTRUNC8 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC8 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
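+;; Indexed stores, one pattern per EEW ratio; <th_order> selects
+;; vsxe.v (ordered) or vsuxe.v (unordered).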
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:RATIO64 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:RATIO32 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:RATIO16 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:RATIO8 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:RATIO4 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:RATIO2 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO1 2 "register_operand" " vr")
+ (match_operand:RATIO1 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO1:MODE>")])
+
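+;; Mask population count and find-first-set: XTheadVector uses
+;; vmpopc.m/vmfirst.m where RVV 1.0 has vcpop.m/vfirst.m.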
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (popcount:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+ "TARGET_XTHEADVECTOR"
+ "vmpopc.m\t%0,%2%p1"
+ [(set_attr "type" "vmpop")
+ (set_attr "mode" "<VB:MODE>")])
+
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (plus:P
+ (ffs:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+ (const_int -1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmfirst.m\t%0,%2%p1"
+ [(set_attr "type" "vmffs")
+ (set_attr "mode" "<VB:MODE>")])
+
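+;; Narrowing float-to-integer conversion, rounding according to the
+;; dynamic rounding mode in FRM.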
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VNCONVERT>
+ [(match_operand:V_VLSF 3 "register_operand" " vd, vd, vr, vr, vr, vr")] VFCVTS)
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftoi")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:<VNCONVERT>
+ (match_operand:VWCONVERTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.x<u>.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtitof")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
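+;; Narrowing right shifts (vnsrl/vnsra) with a vector or immediate
+;; shift amount; the scalar (.vx) variant follows.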
+(define_insn "@pred_th_narrow_<optab><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, vd, vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
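+;; Truncation has no dedicated instruction; emulate it with a
+;; narrowing shift by x0.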
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnsrl.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (float_truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTF_ZVFHMIN 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
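+;; Fault-only-first load (vleff.v); the parallel set models the
+;; update of VL.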
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m")] UNSPEC_VLEFF)
+ (match_operand:V 2 "vector_merge_operand" " vu, 0, vu, 0")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:V
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vleff.v\t%0,%3%p1"
+ [(set_attr "type" "vldff")
+ (set_attr "mode" "<MODE>")])
+
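+;; Segment (tuple) loads and stores: vlseg<nf>e.v, vsseg<nf>e.v and
+;; their strided, indexed and fault-only-first variants.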
+(define_insn "@pred_th_unit_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>e.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegde")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 2 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vsseg<nf>e.v\t%2,(%z1)%p0"
+ [(set_attr "type" "vssegte")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (match_operand 4 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+ [(set_attr "type" "vlsegds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand 2 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 3 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+ [(set_attr "type" "vssegts")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:VT
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>eff.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegdff")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+ [(set (match_operand:V1T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V1T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V1T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO64I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V1T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+ [(set (match_operand:V2T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V2T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V2T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO32I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V2T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+ [(set (match_operand:V4T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V4T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V4T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO16I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V4T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+ [(set (match_operand:V8T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V8T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V8T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO8I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V8T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+ [(set (match_operand:V16T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V16T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V16T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO4I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V16T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+ [(set (match_operand:V32T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V32T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V32T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO2I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V32T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V32T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:V1T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:V2T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:V4T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:V8T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:V16T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:V32T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V32T:MODE>")])
+
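+;; Float negate and absolute value have no dedicated instructions;
+;; use vfsgnjn.vv/vfsgnjx.vv with both sources equal.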
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_neg:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjn.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_abs:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjx.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (not_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vnot.v\t%0,%3%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vrsub.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vf<insn>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
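+;; Narrowing fixed-point clips (vnclip/vnclipu), rounding according
+;; to VXRM.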
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, &vd, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[o]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_LMUL1> 0 "register_operand" "=vr,vr")
+ (unspec:<V_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_LMUL1> [
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr")
+ (match_operand:<V_LMUL1> 4 "register_operand" " vr, vr")
+ ] ANY_FREDUC_SUM)
+ (match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+;; Float Widening Reduction Sum (vfwred[o]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_EXT_LMUL1> 0 "register_operand" "=&vr, &vr")
+ (unspec:<V_EXT_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_EXT_LMUL1> [
+ (match_operand:VF_HS 3 "register_operand" " vr, vr")
+ (match_operand:<V_EXT_LMUL1> 4 "register_operand" " vr0, vr0")
+ ] ANY_FWREDUC_SUM)
+ (match_operand:<V_EXT_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
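+;; Carry-out/borrow-out patterns (vmadc/vmsbc); the mask result must
+;; not overlap the sources, hence the early-clobber "&vr".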
+(define_insn "@pred_th_madc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (match_operand:<VM> 3 "register_operand" " vm, vm, vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2m\t%0,%1,%v2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vvm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "register_operand" " r"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+    [] (rtx *operands, rtx broadcast_scalar) {
+      emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+	   broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK, rK, rK")
+ (match_operand 4 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2\t%0,%1,%v2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
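+
+;; The _overflow patterns model the two-operand carry-out forms: they
+;; compute only the carry (vmadc) or borrow (vmsbc) mask and take no
+;; carry-in mask operand, hence one operand fewer than the patterns
+;; above.  A hedged C-level illustration, assuming the overloaded RVV
+;; intrinsic spelling:
+;;
+;;   #include <riscv_vector.h>
+;;   /* mask[i] = carry-out of a[i] + b[i]; the sum itself is discarded.  */
+;;   vbool32_t overflow (vuint32m1_t a, vuint32m1_t b, size_t vl)
+;;   {
+;;     return __riscv_vmadc (a, b, vl);
+;;   }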
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vv\t%0,%1,%2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*th_vsetvl<mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+ (match_dup 3)
+ (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\t%0,%1,e%2,%m3"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+ [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+ [(match_operand 0 "const_int_operand" "i")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,zero,e%0,%m1"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
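+
+;; Hedged example of when this form fires: if only SEW/LMUL change
+;; between two vector regions while vl itself is unchanged, the vsetvl
+;; pass can emit
+;;
+;;   vsetvli zero,zero,e32,m1
+;;
+;; rewriting vtype but keeping the current vl, with no GPR read or
+;; written.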
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+ [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,%0,e%1,%m2"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
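+
+;; Hedged example: when the new vl value is not needed in a GPR (say the
+;; intrinsic result is ignored), the emitted instruction still sets
+;; vl/vtype but pins the destination to zero, e.g.
+;;
+;;   vsetvli zero,a0,e32,m1
+;;
+;; so the RTL never contains an explicit set of the hard register x0.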
+
+;; It's emitted by vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; this pattern allows us to benefit from those optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "#"
+ "&& epilogue_completed"
+ [(parallel
+ [(set (match_dup 0)
+ (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+ (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))])]
+ ""
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")])
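+
+;; A hedged sketch of the payoff, assuming the RVV vsetvl intrinsic
+;; spelling:
+;;
+;;   size_t vl1 = __riscv_vsetvl_e32m1 (n);
+;;   /* ... no vtype-changing code ... */
+;;   size_t vl2 = __riscv_vsetvl_e32m1 (n);
+;;
+;; Both calls expand to this side-effect-free pattern, so CSE can fold
+;; them into one; only after epilogue_completed is the survivor split
+;; into the parallel that also sets VL_REGNUM and VTYPE_REGNUM.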
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_arith_operand" "vrvi")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vr, vr, vi, vi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vrvi, vrvi, vr, vr, vrvi, vr, vr, vrvi, vrvi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
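+
+;; Why the early-clobber matters here (a hedged explanation): with
+;; source LMUL > 1 the inputs occupy a register group (e.g. v8-v15 at
+;; m8) while the mask result is a single register.  Without "&" the
+;; allocator could give the result v8, and the compare would overwrite
+;; live source lanes before reading them; early-clobber keeps the
+;; destination outside the source group at the cost of one extra
+;; register.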
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vr, vr, vj, vj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vrvj, vrvj, vr, vr, vrvj, vr, vr, vrvj, vrvj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_QHS 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (match_operand:V_VLSF 4 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vv\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))
+ (match_operand:V_VLSF 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
 ])
 (define_mode_iterator VI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
 (define_mode_iterator VF_ZVFHMIN [
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
 ])
 (define_mode_iterator VEEWEXT2 [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
 ])
 (define_mode_iterator VEEWEXT4 [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
 ])
 (define_mode_iterator VEEWTRUNC2 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM4SI "TARGET_64BIT")
 (RVVM2SI "TARGET_64BIT")
 (RVVM1SI "TARGET_64BIT")
- (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEEWTRUNC4 [
- RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM2HI "TARGET_64BIT")
 (RVVM1HI "TARGET_64BIT")
- (RVVMF2HI "TARGET_64BIT")
- (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
 (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEEWTRUNC8 [
 (RVVM1QI "TARGET_64BIT")
- (RVVMF2QI "TARGET_64BIT")
- (RVVMF4QI "TARGET_64BIT")
- (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEI16 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
 ])
 (define_mode_iterator VFULLI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
 ])
 (define_mode_iterator VI_QH [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 (define_mode_iterator VI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
 ])
 (define_mode_iterator VI_QHS_NO_M8 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
 (define_mode_iterator VF_HS [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
 (RVVM4HF "TARGET_ZVFH")
 (RVVM2HF "TARGET_ZVFH")
 (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
 ])
 (define_mode_iterator V_VLSI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
 ;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 (define_mode_iterator RATIO32 [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 (define_mode_iterator RATIO16 [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
 ])
 (define_mode_iterator RATIO64I [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 (define_mode_iterator RATIO32I [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 (define_mode_iterator RATIO16I [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
 ])
 (define_mode_iterator V_FRACT [
- RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 (define_mode_iterator VWEXTI [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
 (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
 (define_mode_iterator VWCONVERTI [
 (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
- (RVVMF2SI "TARGET_ZVFH")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
 ])
 (define_mode_iterator VQEXTI [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 (define_mode_iterator VINDEXED [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
 (define_mode_iterator V_VLS_F_CONVERT_SI [
 (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
 ])
 (define_mode_iterator V_VLS_F_CONVERT_DI [
- (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
- (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+ (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
 V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 (const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 16)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 32)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 64)
+ (const_int 8))
 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
 vislide1up,vislide1down,vfslide1up,vfslide1down,\
 vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+ vlsegdff,vssegtux,vlsegdox,vlsegdux")
+ (match_test "TARGET_XTHEADVECTOR"))
+ (const_int INVALID_ATTRIBUTE)
 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
 (symbol_ref "riscv_vector::FRM_DYN")]
 (symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
 (define_insn "*mov<mode>_whole"
 [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 (match_operand:V_WHOLE 1 "reg_or_mem_operand" " m,vr,vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "@
 vl%m1re<sew>.v\t%0,%1
 vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
 (define_insn "*mov<mode>"
 [(set (match_operand:VB 0 "register_operand" "=vr")
 (match_operand:VB 1 "register_operand" " vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "vmv1r.v\t%0,%1"
 [(set_attr "type" "vmov")
 (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 (any_extend:VWEXTI
 (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
 (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf2\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 (any_extend:VQEXTI
 (match_operand:<V_QUAD_TRUNC> 3 "register_operand" "W43,W43,W43,W43,W86,W86,W86,W86, vr, vr"))
 (match_operand:VQEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf4\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 (any_extend:VOEXTI
 (match_operand:<V_OCT_TRUNC> 3 "register_operand" "W87,W87,W87,W87, vr, vr"))
 (match_operand:VOEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf8\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
 }]
 }
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+ return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+ #ifndef __riscv_xtheadvector
+ #error "Not __riscv_xtheadvector"
+ #endif
+ }]
+}
+
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1


* Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:24       ` Re: [PATCH " joshua
@ 2023-12-20 14:27         ` 钟居哲
  2023-12-20 14:41           ` Re: Re: [PATCH " joshua
  0 siblings, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 14:27 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu


Why do you add this?

+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])


Isn't the XTheadVector instruction th.vfneg.v?




juzhe.zhong@rivai.ai


From: joshua
Date: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

The patterns you consider redundant are all necessary, because they generate
different instructions from vector.  Take pred_th_unit_strided_store as an
example: XTheadVector does not have <sew> in its load/store instructions, so
we cannot reuse the same pattern as vector.  That is why we define a new
function_base in thead-vector-builtins-functions.def.

Joshua
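
As a rough illustration of the <sew> difference (a sketch, not taken from
this patch series: the XTheadVector mnemonic follows the T-Head spec, and the
intrinsic is the standard upstream RVV one):

    #include <riscv_vector.h>

    /* The same unit-strided load of 32-bit elements.  RVV 1.0 encodes
       the element width (EEW) in the mnemonic; XTheadVector encodes only
       the access type and takes the element width from the SEW field of
       vtype, so the mnemonic carries no width at all.  */
    vint32m1_t
    load_i32 (const int32_t *p, size_t vl)
    {
      /* RVV 1.0:       vle32.v  v8,(a0)
         XTheadVector:  th.vlw.v v8,(a0)   -- no "32" in the mnemonic.  */
      return __riscv_vle32_v_i32m1 (p, vl);
    }

Hence a store pattern like pred_th_unit_strided_store has to print
"vsseg<nf>e.v" instead of the upstream "vsseg<nf>e<sew>.v".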

------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: 2023-12-20 (Wednesday) 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+       (match_operand 3 "vector_length_operand"    "   rK")



+       (match_operand 4 "const_int_operand"        "    i")



+       (reg:SI VL_REGNUM)



+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")



+    (match_operand:VT 2 "register_operand"         "   vr")



+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]



+  "TARGET_XTHEADVECTOR"



+  "vsseg<nf>e.v\t%2,(%z1)%p0"



+  [(set_attr "type" "vssegte")



+   (set_attr "mode" "<MODE>")])







These patterns are redundant just names are different.



They should be removed.



juzhe.zhong@rivai.ai



 



From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch handles the differences in instruction generation between
Vector and XTheadVector; adding the th. prefix to all XTheadVector
instructions is not included.

For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md, so that we do not
generate instructions that XTheadVector does not support, like vmv1r
and vsext.vf2 (see the sketch below).
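
A minimal sketch of why such a guard is needed (the intrinsic below is the
standard upstream RVV one, used here only for illustration):

    #include <riscv_vector.h>

    /* Sign-extend each int8 element to int16.  Plain RVV can emit a
       single "vsext.vf2 v8,v16"; XTheadVector has no vsext.vf2, so the
       corresponding vector.md pattern is fenced off with
       !TARGET_XTHEADVECTOR and the widening has to be expanded another
       way (e.g. as a widening add against zero).  */
    vint16m2_t
    widen_i8 (vint8m1_t v, size_t vl)
    {
      return __riscv_vsext_vf2_i16m2 (v, vl);
    }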

gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>

---
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc



index f0676c830e8..4478395ab77 100644



--- a/gcc/config.gcc



+++ b/gcc/config.gcc



@@ -547,9 +547,9 @@ riscv*)



extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"



extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"



extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"



- extra_objs="${extra_objs} thead.o riscv-target-attr.o"



+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"



d_target_objs="riscv-d.o"



- extra_headers="riscv_vector.h"



+ extra_headers="riscv_vector.h riscv_th_vector.h"



target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"



target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"



;;



diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md



index 8b8a92f10a1..1fac56c7095 100644



--- a/gcc/config/riscv/autovec.md



+++ b/gcc/config/riscv/autovec.md



@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"



   [(match_operand      0 "register_operand")



    (match_operand      1 "memory_operand")



    (match_operand:ANYI 2 "const_int_operand")]



-  "TARGET_VECTOR"



+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"



   {



     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],



   operands[2]);



diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md



index 1a3a4f1ecbb..d910367e59c 100644



--- a/gcc/config/riscv/predicates.md



+++ b/gcc/config/riscv/predicates.md



@@ -64,8 +64,9 @@ (define_predicate "csr_operand"



        (match_operand 0 "register_operand")))



(define_predicate "vector_csr_operand"



-  (ior (match_operand 0 "const_csr_operand")



-       (match_operand 0 "register_operand")))



+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")



+      (match_operand 0 "const_csr_operand"))



+    (match_operand 0 "register_operand")))



;; V has 32-bit unsigned immediates.  This happens to be the same constraint as



;; the csr_operand, but it's not CSR related.



@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"



;; Predicates for the V extension.



(define_special_predicate "vector_length_operand"



   (ior (match_operand 0 "pmode_register_operand")



-       (match_operand 0 "const_csr_operand")))



+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")



+    (match_operand 0 "const_csr_operand"))))



(define_special_predicate "autovec_length_operand"



   (ior (match_operand 0 "pmode_register_operand")



diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc



index 11c1f74d0b3..ec8f3486fd8 100644



--- a/gcc/config/riscv/riscv-string.cc



+++ b/gcc/config/riscv/riscv-string.cc



@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)



bnez a2, loop                   # Any more?



ret                             # Return



   */



+   if (TARGET_XTHEADVECTOR)



+    return false;



+



   gcc_assert (TARGET_VECTOR);



   HOST_WIDE_INT potential_ew



diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc



index 486f5deb296..710332e17db 100644



--- a/gcc/config/riscv/riscv-v.cc



+++ b/gcc/config/riscv/riscv-v.cc



@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)



       return true;



     }



+  if (TARGET_XTHEADVECTOR)



+      {



+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,



+   RVV_VLMAX, GEN_INT(VLMAX)));



+ return true;



+      }



+



   if (riscv_v_ext_vls_mode_p (mode))



     {



       if (GET_MODE_NUNITS (mode).to_constant () <= 31)



@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()



      compiler pick up either agnostic or undisturbed. Maybe we



      will have a compile option like -mprefer=agnostic to set



      this value???.  */



-  return TAIL_ANY;



+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;



}



/* Get prefer mask policy.  */



@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()



      compiler pick up either agnostic or undisturbed. Maybe we



      will have a compile option like -mprefer=agnostic to set



      this value???.  */



-  return MASK_ANY;



+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;



}



/* Get avl_type rtx.  */



@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)



bool



vls_mode_valid_p (machine_mode vls_mode)



{



-  if (!TARGET_VECTOR)



+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)



     return false;



   if (riscv_autovec_preference == RVV_SCALABLE)



diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc



index 4a754e0228f..6b49404a1fa 100644



--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc



+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc



@@ -33,6 +33,25 @@



namespace riscv_vector {



+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are



+   valid for the function.  */



+



+static bool



+check_type (tree return_type, vec<tree> &argument_types)



+{



+  tree arg;



+  unsigned i;



+



+  if (!return_type)



+    return false;



+



+  FOR_EACH_VEC_ELT (argument_types, i, arg)



+    if (!arg)



+      return false;



+



+  return true;



+}



+



/* Add one function instance for GROUP, using operand suffix at index OI,



    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */



static void



@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,



     group.ops_infos.types[vec_type_idx].index);



   b.allocate_argument_types (function_instance, argument_types);



   b.apply_predication (function_instance, return_type, argument_types);



+



+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))



+    return;



+



   b.add_overloaded_function (function_instance, *group.shape);



   b.add_unique_function (function_instance, (*group.shape), return_type,



argument_types);



diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc



index 4e2c66c2de7..f5f9000d89c 100644



--- a/gcc/config/riscv/riscv-vector-builtins.cc



+++ b/gcc/config/riscv/riscv-vector-builtins.cc



@@ -51,6 +51,7 @@



#include "riscv-vector-builtins.h"



#include "riscv-vector-builtins-shapes.h"



#include "riscv-vector-builtins-bases.h"



+#include "thead-vector-builtins.h"



using namespace riscv_vector;



@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {



#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \



   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},



#include "riscv-vector-builtins-functions.def"



+#undef DEF_RVV_FUNCTION



+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \



+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},



+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \



+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},



+#include "thead-vector-builtins-functions.def"



};



/* The RVV types, with their built-in



diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h



index 4f38c09d73d..bb463510dd2 100644



--- a/gcc/config/riscv/riscv-vector-builtins.h



+++ b/gcc/config/riscv/riscv-vector-builtins.h



@@ -123,6 +123,7 @@ enum required_ext



   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */



   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */



   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */



+  XTHEADVECTOR_EXT,   /* XTheadVector extension */



};



/* Enumerates the RVV operand types.  */



@@ -233,7 +234,7 @@ struct function_group_info



     switch (ext_value)



     {



       case VECTOR_EXT:



-        return TARGET_VECTOR;



+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);



       case ZVBB_EXT:



         return TARGET_ZVBB;



       case ZVBB_OR_ZVKB_EXT:



@@ -252,6 +253,8 @@ struct function_group_info



         return TARGET_ZVKSED;



       case ZVKSH_EXT:



         return TARGET_ZVKSH;



+      case XTHEADVECTOR_EXT:



+ return TARGET_XTHEADVECTOR;



       default:



         gcc_unreachable ();



     }



diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def



index 5c9f9bcbc3e..f7a66b34bae 100644



--- a/gcc/config/riscv/riscv-vector-switch.def



+++ b/gcc/config/riscv/riscv-vector-switch.def



@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.



#endif



/* Disable modes if TARGET_MIN_VLEN == 32.  */



-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)



-ENTRY (RVVMF32BI, true, LMUL_F4, 32)



-ENTRY (RVVMF16BI, true, LMUL_F2, 16)



+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)



+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)



+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)



ENTRY (RVVMF8BI, true, LMUL_1, 8)



ENTRY (RVVMF4BI, true, LMUL_2, 4)



ENTRY (RVVMF2BI, true, LMUL_4, 2)



@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)



ENTRY (RVVM4QI, true, LMUL_4, 2)



ENTRY (RVVM2QI, true, LMUL_2, 4)



ENTRY (RVVM1QI, true, LMUL_1, 8)



-ENTRY (RVVMF2QI, true, LMUL_F2, 16)



-ENTRY (RVVMF4QI, true, LMUL_F4, 32)



-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)



+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)



+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)



+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)



/* Disable modes if TARGET_MIN_VLEN == 32.  */



ENTRY (RVVM8HI, true, LMUL_8, 2)



ENTRY (RVVM4HI, true, LMUL_4, 4)



ENTRY (RVVM2HI, true, LMUL_2, 8)



ENTRY (RVVM1HI, true, LMUL_1, 16)



-ENTRY (RVVMF2HI, true, LMUL_F2, 32)



-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)



+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)



+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)



/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */



ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)



ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)



ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)



ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)



-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)



-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)



+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)



+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)



/* Disable modes if TARGET_MIN_VLEN == 32.  */



ENTRY (RVVM8SI, true, LMUL_8, 4)



ENTRY (RVVM4SI, true, LMUL_4, 8)



ENTRY (RVVM2SI, true, LMUL_2, 16)



ENTRY (RVVM1SI, true, LMUL_1, 32)



-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)



+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)



/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */



ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)



ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)



ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)



ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)



-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)



+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)



/* Disable modes if !TARGET_VECTOR_ELEN_64.  */



ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)



@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)



#endif



TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)



TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)



TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)



TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)



TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)



TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)



TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)



TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)



TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)



TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)



TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)



-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)



-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)



-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)



+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)



+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)



+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)



TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)



TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)



TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)



TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)



TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)



TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)



TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)



TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)



TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)



-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)



+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)



TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)



TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)



TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)



TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)



TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)



TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)



TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)



TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)



TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)



TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)



-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)



+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)



TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)



TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)



diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc



index d3010bed8d8..18cc64b63e6 100644



--- a/gcc/config/riscv/riscv.cc



+++ b/gcc/config/riscv/riscv.cc



@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)



{



   if (riscv_v_ext_vector_mode_p (mode))



     {



+      if (TARGET_XTHEADVECTOR)



+ return BYTES_PER_RISCV_VECTOR;



+



       poly_int64 nunits = GET_MODE_NUNITS (mode);



       poly_int64 mode_size = GET_MODE_SIZE (mode);



@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)



static machine_mode



riscv_preferred_simd_mode (scalar_mode mode)



{



-  if (TARGET_VECTOR)



+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)



     return riscv_vector::preferred_simd_mode (mode);



   return word_mode;



@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)



unsigned int



riscv_autovectorize_vector_modes (vector_modes *modes, bool all)



{



-  if (TARGET_VECTOR)



+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)



     return riscv_vector::autovectorize_vector_modes (modes, all);



   return default_autovectorize_vector_modes (modes, all);



@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)



   return false;



}



+/* Implement TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P.  */



+



+static bool



+riscv_vector_mode_supported_any_target_p (machine_mode mode)



+{



+  if (TARGET_XTHEADVECTOR)



+    return false;



+  return true;



+}



+



/* Initialize the GCC target structure.  */



#undef TARGET_ASM_ALIGNED_HI_OP



#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"



@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)



#undef TARGET_PREFERRED_ELSE_VALUE



#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value



+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P



+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p



+



struct gcc_target targetm = TARGET_INITIALIZER;



#include "gt-riscv.h"



diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h



new file mode 100644



index 00000000000..6f47e0c90a4



--- /dev/null



+++ b/gcc/config/riscv/riscv_th_vector.h



@@ -0,0 +1,49 @@



+/* RISC-V 'XTheadVector' Extension intrinsics include file.



+   Copyright (C) 2022-2023 Free Software Foundation, Inc.



+



+   This file is part of GCC.



+



+   GCC is free software; you can redistribute it and/or modify it



+   under the terms of the GNU General Public License as published



+   by the Free Software Foundation; either version 3, or (at your



+   option) any later version.



+



+   GCC is distributed in the hope that it will be useful, but WITHOUT



+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY



+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public



+   License for more details.



+



+   Under Section 7 of GPL version 3, you are granted additional



+   permissions described in the GCC Runtime Library Exception, version



+   3.1, as published by the Free Software Foundation.



+



+   You should have received a copy of the GNU General Public License and



+   a copy of the GCC Runtime Library Exception along with this program;



+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see



+   <http://www.gnu.org/licenses/>.  */



+



+#ifndef __RISCV_TH_VECTOR_H



+#define __RISCV_TH_VECTOR_H



+



+#include <stdint.h>



+#include <stddef.h>



+



+#ifndef __riscv_xtheadvector



+#error "XTheadVector intrinsics require the xtheadvector extension."



+#else



+#ifdef __cplusplus



+extern "C" {



+#endif



+



+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does



+   not define the RVV types and intrinsic functions directly in C and C++



+   code, but instead uses the following pragma to tell GCC to insert the



+   necessary type and function definitions itself.  The net effect is the



+   same, and the file is a complete implementation of riscv_th_vector.h.  */



+#pragma riscv intrinsic "vector"



+



+#ifdef __cplusplus



+}



+#endif // __cplusplus



+#endif // __riscv_xtheadvector



+#endif // __RISCV_TH_VECTOR_H
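
A minimal usage sketch for this header (illustrative only; it assumes the
standard __riscv_* intrinsic naming that the "#pragma riscv intrinsic"
expansion provides, and that the XTheadVector builtin tables later in this
series register the same names):

#include <riscv_th_vector.h>

/* c[i] = a[i] + b[i], strip-mined with vsetvl so each iteration
   processes as many elements as one vector register holds.  */
void
vadd_i32 (int32_t *c, const int32_t *a, const int32_t *b, size_t n)
{
  while (n > 0)
    {
      size_t vl = __riscv_vsetvl_e32m1 (n);          /* elements this pass */
      vint32m1_t va = __riscv_vle32_v_i32m1 (a, vl); /* unit-stride loads */
      vint32m1_t vb = __riscv_vle32_v_i32m1 (b, vl);
      __riscv_vse32_v_i32m1 (c, __riscv_vadd_vv_i32m1 (va, vb, vl), vl);
      a += vl;
      b += vl;
      c += vl;
      n -= vl;
    }
}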



diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv



index 067771e3c97..09512092056 100644



--- a/gcc/config/riscv/t-riscv



+++ b/gcc/config/riscv/t-riscv



@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \



   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \



   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \



   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \



+  $(srcdir)/config/riscv/thead-vector-builtins.h \



+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \



   $(RISCV_BUILTINS_H)



$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \



$(srcdir)/config/riscv/riscv-vector-builtins.cc



@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \



$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \



$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc



+thead-vector-builtins.o: \



+  $(srcdir)/config/riscv/thead-vector-builtins.cc \



+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \



+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \



+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \



+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \



+  rtx-vector-builder.h \



+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \



+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \



+  $(srcdir)/config/riscv/thead-vector-builtins.h \



+  $(RISCV_BUILTINS_H)



+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \



+ $(srcdir)/config/riscv/thead-vector-builtins.cc



+



riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \



   $(SYSTEM_H) $(TM_H)



$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \



diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def



new file mode 100644



index 00000000000..a85ca24cb31



--- /dev/null



+++ b/gcc/config/riscv/thead-vector-builtins-functions.def



@@ -0,0 +1,627 @@



+#ifndef DEF_RVV_FUNCTION



+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)



+#endif



+



+#ifndef DEF_THEAD_RVV_FUNCTION



+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)



+#endif



+



+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT



+/* Internal helper functions for gimple fold use.  */



+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)



+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)



+



+/* 6. Configuration-Setting Instructions.  */



+



+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)



+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)



+



+/* 7. Vector Loads and Stores.  */



+



+// 7.4. Vector Unit-Stride Instructions



+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)



+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)



+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)



+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)



+



+// 7.5. Vector Strided Instructions



+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)



+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)



+



+// 7.6. Vector Indexed Instructions



+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)



+



+// 7.7. Unit-stride Fault-Only-First Loads



+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)



+



+// TODO: 7.8. Vector Load/Store Segment Instructions



+



+/* 11. Vector Integer Arithmetic Instructions.  */



+



+// 11.1. Vector Single-Width Integer Add and Subtract



+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)



+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)



+



+// 11.2. Vector Widening Integer Add/Subtract



+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)



+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)



+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)



+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)



+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)



+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)



+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)



+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)



+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)



+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)



+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)



+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)



+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)



+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)



+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)



+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)



+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)



+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)



+



+// 11.3. Vector Integer Extension



+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)



+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)



+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)



+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)



+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)



+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)



+



+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions



+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)



+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)



+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)



+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)



+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)



+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)



+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)



+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)



+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)



+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)



+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)



+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)



+



+// 11.5. Vector Bitwise Logical Instructions



+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)



+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)



+



+// 11.6. Vector Single-Width Shift Instructions



+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)



+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)



+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)



+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)



+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)



+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)



+



+// 11.7. Vector Narrowing Integer Right Shift Instructions



+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)



+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)



+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)



+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)



+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)



+



+// 11.8. Vector Integer Compare Instructions



+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)



+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)



+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)



+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)



+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)



+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)



+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)



+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)



+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)



+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)



+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)



+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)



+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)



+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)



+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)



+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)



+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)



+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)



+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)



+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)



+



+// 11.9. Vector Integer Min/Max Instructions



+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)



+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)



+



+// 11.10. Vector Single-Width Integer Multiply Instructions



+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)



+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)



+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)



+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)



+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)



+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)



+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)



+



+// 11.11. Vector Integer Divide Instructions



+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)



+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)



+



+// 11.12. Vector Widening Integer Multiply Instructions



+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)



+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)



+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)



+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)



+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)



+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)



+



+// 11.13. Vector Single-Width Integer Multiply-Add Instructions



+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)



+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)



+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)



+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)



+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)



+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)



+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)



+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)



+



+// 11.14. Vector Widening Integer Multiply-Add Instructions



+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)



+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)



+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)



+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)



+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)



+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)



+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)



+



+// 11.15. Vector Integer Merge Instructions



+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)



+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)



+



+// 11.16. Vector Integer Move Instructions



+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)



+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)



+



+/* 12. Vector Fixed-Point Arithmetic Instructions.  */



+



+// 12.1. Vector Single-Width Saturating Add and Subtract



+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)



+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)



+



+// 12.2. Vector Single-Width Averaging Add and Subtract



+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)



+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)



+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)



+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)



+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)



+



+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation



+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)



+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)



+



+// 12.4. Vector Single-Width Scaling Shift Instructions



+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)



+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)



+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)



+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)



+



+// 12.5. Vector Narrowing Fixed-Point Clip Instructions



+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)



+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)



+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)



+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)



+



+/* 13. Vector Floating-Point Instructions.  */



+



+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions



+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)



+



+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions



+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)



+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)



+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)



+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)



+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)



+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)



+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)



+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)



+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)



+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)



+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)



+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)



+



+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions



+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)



+



+// 13.5. Vector Widening Floating-Point Multiply



+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)



+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)



+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)



+



+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions



+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)



+



+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)



+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)



+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)



+



+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions



+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)



+



+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)



+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)



+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)



+



+// 13.8. Vector Floating-Point Square-Root Instruction



+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)



+



+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)



+



+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction



+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)



+



+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction



+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)



+



+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)



+



+// 13.11. Vector Floating-Point MIN/MAX Instructions



+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)



+



+// 13.12. Vector Floating-Point Sign-Injection Instructions



+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)



+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)



+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)



+



+// 13.13. Vector Floating-Point Compare Instructions



+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)



+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)



+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)



+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)



+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)



+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)



+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)



+



+// 13.14. Vector Floating-Point Classify Instruction



+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)



+



+// 13.15. Vector Floating-Point Merge Instruction



+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)



+



+// 13.16. Vector Floating-Point Move Instruction



+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)



+



+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions



+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)



+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)



+



+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)



+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)



+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)



+



+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions



+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)



+



+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)



+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)



+



+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions



+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)



+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)



+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)



+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)



+



+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)



+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)



+



+/* 14. Vector Reduction Operations.  */



+



+// 14.1. Vector Single-Width Integer Reduction Instructions



+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)



+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)



+



+// 14.2. Vector Widening Integer Reduction Instructions



+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)



+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)



+



+// 14.3. Vector Single-Width Floating-Point Reduction Instructions



+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)



+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)



+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)



+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)



+



+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)



+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)



+



+// 14.4. Vector Widening Floating-Point Reduction Instructions



+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)



+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)



+



+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)



+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)



+



+/* 15. Vector Mask Instructions.  */



+



+// 15.1. Vector Mask-Register Logical Instructions



+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)



+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)



+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)



+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)



+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)



+// 15.2. Vector count population in mask vcpop.m



+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)



+// 15.3. vfirst find-first-set mask bit



+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)



+// 15.4. vmsbf.m set-before-first mask bit



+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)



+// 15.5. vmsif.m set-including-first mask bit



+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)



+// 15.6. vmsof.m set-only-first mask bit



+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)



+// 15.8. Vector Iota Instruction



+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)



+// 15.9. Vector Element Index Instruction



+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)



+



+/* 16. Vector Permutation Instructions.  */



+



+// 16.1. Integer Scalar Move Instructions



+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)



+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)



+



+// 16.2. Floating-Point Scalar Move Instructions



+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)



+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)



+



+// 16.3. Vector Slide Instructions



+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)



+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)



+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)



+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)



+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)



+



+// 16.4. Vector Register Gather Instructions



+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)



+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)



+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)



+



+// 16.5. Vector Compress Instruction



+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)



+



+/* Miscellaneous Vector Functions.  */



+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)



+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)



+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)



+



+// Tuple types



+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)



+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)



+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)



+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)



+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)



+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)



+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)



+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)



+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)



+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)



+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)



+#undef REQUIRED_EXTENSIONS



+



+#undef DEF_RVV_FUNCTION



+#undef DEF_THEAD_RVV_FUNCTION



\ No newline at end of file
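
A sketch of how a .def file in this style is consumed (the real expansion
point is the function table in riscv-vector-builtins.cc; the table type and
field order here are illustrative assumptions):

/* Each DEF_* entry expands to one table row.  DEF_THEAD_RVV_FUNCTION
   differs from DEF_RVV_FUNCTION only in naming an XTheadVector-specific
   function_base instance (BASE) instead of reusing the standard one.  */
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO},
#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) \
  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO},
static function_group_info thead_function_groups[] = {
#include "thead-vector-builtins-functions.def"
};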



diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc



new file mode 100644



index 00000000000..9d84ed39937



--- /dev/null



+++ b/gcc/config/riscv/thead-vector-builtins.cc



@@ -0,0 +1,746 @@



+/* function_base implementation for RISC-V XTheadVector Extension



+   for GNU compiler.



+   Copyright (C) 2022-2023 Free Software Foundation, Inc.



+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head



+   Semiconductor Co., Ltd.



+



+   This file is part of GCC.



+



+   GCC is free software; you can redistribute it and/or modify it



+   under the terms of the GNU General Public License as published by



+   the Free Software Foundation; either version 3, or (at your option)



+   any later version.



+



+   GCC is distributed in the hope that it will be useful, but



+   WITHOUT ANY WARRANTY; without even the implied warranty of



+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU



+   General Public License for more details.



+



+   You should have received a copy of the GNU General Public License



+   along with GCC; see the file COPYING3.  If not see



+   <http://www.gnu.org/licenses/>.  */



+



+#include "config.h"



+#include "system.h"



+#include "coretypes.h"



+#include "tm.h"



+#include "tree.h"



+#include "rtl.h"



+#include "tm_p.h"



+#include "memmodel.h"



+#include "insn-codes.h"



+#include "optabs.h"



+#include "recog.h"



+#include "expr.h"



+#include "basic-block.h"



+#include "function.h"



+#include "fold-const.h"



+#include "gimple.h"



+#include "gimple-iterator.h"



+#include "gimplify.h"



+#include "explow.h"



+#include "emit-rtl.h"



+#include "tree-vector-builder.h"



+#include "rtx-vector-builder.h"



+#include "riscv-vector-builtins.h"



+#include "riscv-vector-builtins-shapes.h"



+#include "riscv-vector-builtins-bases.h"



+#include "thead-vector-builtins.h"



+



+using namespace riscv_vector;



+



+namespace riscv_vector {



+



+/* Implements vsetvl<mode> and vsetvlmax<mode>.  */



+template<bool VLMAX_P>



+class th_vsetvl : public function_base



+{



+public:



+  bool apply_vl_p () const override



+  {



+    return false;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    if (VLMAX_P)



+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));



+    else



+      e.add_input_operand (0);



+



+    tree type = builtin_types[e.type.index].vector;



+    machine_mode mode = TYPE_MODE (type);



+



+    machine_mode inner_mode = GET_MODE_INNER (mode);



+    /* SEW.  */



+    e.add_input_operand (Pmode,



+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));



+



+    /* LMUL.  */



+    e.add_input_operand (Pmode,



+      gen_int_mode (get_vlmul (mode), Pmode));



+



+    /* TAIL_ANY.  */



+    e.add_input_operand (Pmode,



+ gen_int_mode (get_prefer_tail_policy (), Pmode));



+



+    /* MASK_ANY.  */



+    e.add_input_operand (Pmode,



+ gen_int_mode (get_prefer_mask_policy (), Pmode));



+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));



+  }



+};
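+
+/* Note: the operand order pushed above (AVL, SEW, VLMUL, tail policy,
+   mask policy) is assumed to match the th_vsetvl no-side-effects
+   pattern in thead-vector.md.  */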



+



+/* Implements



+   vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v



+   codegen.  */



+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>



+class th_loadstore : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return !STORE_P; }



+  bool apply_mask_policy_p () const override { return !STORE_P; }



+



+  unsigned int call_properties (const function_instance &) const override



+  {



+    if (STORE_P)



+      return CP_WRITE_MEMORY;



+    else



+      return CP_READ_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index pred) const override



+  {



+    if (STORE_P || LST_TYPE == LST_INDEXED)



+      return true;



+    return pred != PRED_TYPE_none;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    if (LST_TYPE == LST_INDEXED)



+      {



+ int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;



+ if (STORE_P)



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_store (unspec, e.vector_mode (),



+       e.index_mode ()));



+ else



+   {



+     unsigned src_eew_bitsize



+       = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));



+     unsigned dst_eew_bitsize



+       = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));



+     if (dst_eew_bitsize == src_eew_bitsize)



+       {



+ return e.use_exact_insn (



+   code_for_pred_th_indexed_load_same_eew (



+     unspec, e.vector_mode ()));



+       }



+     else if (dst_eew_bitsize > src_eew_bitsize)



+       {



+ unsigned factor = dst_eew_bitsize / src_eew_bitsize;



+ switch (factor)



+   {



+   case 2:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x2_greater_eew (



+ unspec, e.vector_mode ()));



+   case 4:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x4_greater_eew (



+ unspec, e.vector_mode ()));



+   case 8:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x8_greater_eew (



+ unspec, e.vector_mode ()));



+   default:



+     gcc_unreachable ();



+   }



+       }



+     else



+       {



+ unsigned factor = src_eew_bitsize / dst_eew_bitsize;



+ switch (factor)



+   {



+   case 2:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x2_smaller_eew (



+ unspec, e.vector_mode ()));



+   case 4:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x4_smaller_eew (



+ unspec, e.vector_mode ()));



+   case 8:



+     return e.use_exact_insn (



+       code_for_pred_th_indexed_load_x8_smaller_eew (



+ unspec, e.vector_mode ()));



+   default:



+     gcc_unreachable ();



+   }



+       }



+   }



+      }



+    else if (LST_TYPE == LST_STRIDED)



+      {



+ if (STORE_P)



+   return e.use_contiguous_store_insn (



+     code_for_pred_th_strided_store (e.vector_mode ()));



+ else



+   return e.use_contiguous_load_insn (



+     code_for_pred_th_strided_load (e.vector_mode ()));



+      }



+    else



+      {



+ if (STORE_P)



+   return e.use_contiguous_store_insn (



+     code_for_pred_th_store (e.vector_mode ()));



+ else



+   return e.use_contiguous_load_insn (



+     code_for_pred_mov (e.vector_mode ()));



+      }



+  }



+};
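+
+/* Worked example of the EEW dispatch above: a vluxei8 load of 32-bit
+   elements sees dst_eew_bitsize = 32 and src_eew_bitsize = 8, so
+   factor = 4 and the x4_greater_eew pattern is used; a vluxei64 load
+   of 32-bit elements gives factor = 2 on the smaller-EEW side.  */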



+



+/* Implements vneg/vnot.  */



+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>



+class th_unop : public function_base



+{



+public:



+  bool has_rounding_mode_operand_p () const override



+  {



+    return FRM_OP == HAS_FRM;



+  }



+



+  bool may_require_frm_p () const override { return true; }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));



+  }



+};



+



+/* Implements vnsrl/vnsra.  */



+template<rtx_code CODE>



+class th_vnshift : public function_base



+{



+public:



+  rtx expand (function_expander &e) const override



+  {



+    switch (e.op_info->op)



+      {



+      case OP_TYPE_wx:



+ return e.use_exact_insn (



+   code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));



+      case OP_TYPE_wv:



+ return e.use_exact_insn (



+   code_for_pred_th_narrow (CODE, e.vector_mode ()));



+      default:



+ gcc_unreachable ();



+      }



+  }



+};



+



+/* Implements vncvt.  */



+class th_vncvt_x : public function_base



+{



+public:



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_trunc (e.vector_mode ()));



+  }



+};



+



+/* Implements vnclip/vnclipu.  */



+template<int UNSPEC>



+class th_vnclip : public function_base



+{



+public:



+  bool has_rounding_mode_operand_p () const override { return true; }



+



+  bool may_require_vxrm_p () const override { return true; }



+



+  rtx expand (function_expander &e) const override



+  {



+    switch (e.op_info->op)



+      {



+      case OP_TYPE_wx:



+ return e.use_exact_insn (



+   code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));



+      case OP_TYPE_wv:



+ return e.use_exact_insn (



+   code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));



+      default:



+ gcc_unreachable ();



+      }



+  }



+};



+



+/* Implements vcpop.  */



+class th_vcpop : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+  bool has_merge_operand_p () const override { return false; }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_popcount (e.vector_mode (), Pmode));



+  }



+};



+



+/* Implements vfirst.  */



+class th_vfirst : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+  bool has_merge_operand_p () const override { return false; }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_ffs (e.vector_mode (), Pmode));



+  }



+};



+



+/* Implements vmadc.  */



+class th_vmadc : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+  bool use_mask_predication_p () const override { return false; }



+  bool has_merge_operand_p () const override { return false; }



+



+  rtx expand (function_expander &e) const override



+  {



+    switch (e.op_info->op)



+      {



+      case OP_TYPE_vvm:



+ return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));



+      case OP_TYPE_vxm:



+ return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));



+      case OP_TYPE_vv:



+ return e.use_exact_insn (



+   code_for_pred_th_madc_overflow (e.vector_mode ()));



+      case OP_TYPE_vx:



+ return e.use_exact_insn (



+   code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));



+      default:



+ gcc_unreachable ();



+      }



+  }



+};



+



+/* Implements vmsbc.  */



+class th_vmsbc : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+  bool use_mask_predication_p () const override { return false; }



+  bool has_merge_operand_p () const override { return false; }



+



+  rtx expand (function_expander &e) const override



+  {



+    switch (e.op_info->op)



+      {



+      case OP_TYPE_vvm:



+ return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));



+      case OP_TYPE_vxm:



+ return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));



+      case OP_TYPE_vv:



+ return e.use_exact_insn (



+   code_for_pred_th_msbc_overflow (e.vector_mode ()));



+      case OP_TYPE_vx:



+ return e.use_exact_insn (



+   code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));



+      default:



+ gcc_unreachable ();



+      }



+  }



+};



+



+/* Implements vfncvt.x.  */



+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>



+class th_vfncvt_x : public function_base



+{



+public:



+  bool has_rounding_mode_operand_p () const override



+  {



+    return FRM_OP == HAS_FRM;



+  }



+



+  bool may_require_frm_p () const override { return true; }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));



+  }



+};



+
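+/* Implements vfncvt.f.  */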



+template<enum frm_op_type FRM_OP = NO_FRM>



+class th_vfncvt_f : public function_base



+{



+public:



+  bool has_rounding_mode_operand_p () const override



+  {



+    return FRM_OP == HAS_FRM;



+  }



+



+  bool may_require_frm_p () const override { return true; }



+



+  rtx expand (function_expander &e) const override



+  {



+    if (e.op_info->op == OP_TYPE_f_w)



+      return e.use_exact_insn (



+ code_for_pred_th_trunc (e.vector_mode ()));



+    if (e.op_info->op == OP_TYPE_x_w)



+      return e.use_exact_insn (



+ code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));



+    if (e.op_info->op == OP_TYPE_xu_w)



+      return e.use_exact_insn (



+ code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));



+    gcc_unreachable ();



+  }



+};



+



+/* Implements floating-point reduction instructions.  */



+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>



+class th_freducop : public function_base



+{



+public:



+  bool has_rounding_mode_operand_p () const override



+  {



+    return FRM_OP == HAS_FRM;



+  }



+



+  bool may_require_frm_p () const override { return true; }



+



+  bool apply_mask_policy_p () const override { return false; }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));



+  }



+};



+
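The ordered/unordered split encoded in the UNSPECs here is not cosmetic: floating-point addition is not associative, so vfredosum must accumulate strictly in element order while vfredusum may reassociate. A scalar model of the ordered form (names are illustrative):

    static float model_redosum (const float *v, unsigned long vl, float init)
    {
      float s = init;
      for (unsigned long i = 0; i < vl; i++)
        s += v[i];  /* strictly in element order */
      return s;
    }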



+class th_vleff : public function_base



+{



+public:



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_READ_MEMORY | CP_WRITE_CSR;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index pred) const override



+  {



+    return pred != PRED_TYPE_none;



+  }



+



+  gimple *fold (gimple_folder &f) const override



+  {



+    return fold_fault_load (f);



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_contiguous_load_insn (



+      code_for_pred_th_fault_load (e.vector_mode ()));



+  }



+};



+
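A conceptual model of the fault-only-first behavior behind th_vleff; this is not the intrinsic API, just the semantics that motivate CP_WRITE_CSR (the instruction can trim vl). The 'valid' parameter stands in for the first address that would fault:

    #include <stddef.h>
    #include <stdint.h>

    static size_t model_vleff (int8_t *dst, const int8_t *src,
                               size_t vl, size_t valid)
    {
      size_t n = vl < valid ? vl : valid;  /* stop before the fault */
      for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
      return n;  /* written back to the vl CSR */
    }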



+/* Implements vlseg.v.  */



+class th_vlseg : public function_base



+{



+public:



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_READ_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index pred) const override



+  {



+    return pred != PRED_TYPE_none;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_unit_strided_load (e.vector_mode ()));



+  }



+};



+



+/* Implements vsseg.v.  */



+class th_vsseg : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_WRITE_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index) const override



+  {



+    return true;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_unit_strided_store (e.vector_mode ()));



+  }



+};



+



+/* Implements vlsseg.v.  */



+class th_vlsseg : public function_base



+{



+public:



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_READ_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index pred) const override



+  {



+    return pred != PRED_TYPE_none;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_strided_load (e.vector_mode ()));



+  }



+};



+



+/* Implements vssseg.v.  */



+class th_vssseg : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_WRITE_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index) const override



+  {



+    return true;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_strided_store (e.vector_mode ()));



+  }



+};



+



+template<int UNSPEC>



+class th_seg_indexed_load : public function_base



+{



+public:



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_READ_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index) const override



+  {



+    return true;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_indexed_load (



+	UNSPEC, e.vector_mode (), e.index_mode ()));



+  }



+};



+



+template<int UNSPEC>



+class th_seg_indexed_store : public function_base



+{



+public:



+  bool apply_tail_policy_p () const override { return false; }



+  bool apply_mask_policy_p () const override { return false; }



+



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_WRITE_MEMORY;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index) const override



+  {



+    return true;



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_indexed_store (



+	UNSPEC, e.vector_mode (), e.index_mode ()));



+  }



+};



+



+/* Implements vlsegff.v.  */



+class th_vlsegff : public function_base



+{



+public:



+  unsigned int call_properties (const function_instance &) const override



+  {



+    return CP_READ_MEMORY | CP_WRITE_CSR;



+  }



+



+  bool can_be_overloaded_p (enum predication_type_index pred) const override



+  {



+    return pred != PRED_TYPE_none;



+  }



+



+  gimple *fold (gimple_folder &f) const override



+  {



+    return fold_fault_load (f);



+  }



+



+  rtx expand (function_expander &e) const override



+  {



+    return e.use_exact_insn (



+      code_for_pred_th_fault_load (e.vector_mode ()));



+  }



+};



+
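+/* Table of function_base instances backing the builtins.  th_loadstore's
+   template arguments are, in order: store (vs. load), addressing kind
+   (unit-stride/strided/indexed) and, for indexed accesses, whether the
+   access is ordered; see the class definition earlier in this file.  */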



+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;



+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;



+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;



+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;



+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;



+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;



+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;



+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;



+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;



+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;



+static CONSTEXPR const th_unop<NEG> th_vneg_obj;



+static CONSTEXPR const th_unop<NOT> th_vnot_obj;



+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;



+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;



+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;



+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;



+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;



+static CONSTEXPR const th_vcpop th_vcpop_obj;



+static CONSTEXPR const th_vfirst th_vfirst_obj;



+static CONSTEXPR const th_vmadc th_vmadc_obj;



+static CONSTEXPR const th_vmsbc th_vmsbc_obj;



+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;



+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;



+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;



+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;



+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;



+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;



+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;



+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;



+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;



+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;



+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;



+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;



+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;



+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;



+static CONSTEXPR const th_vleff th_vleff_obj;



+static CONSTEXPR const th_vlseg th_vlseg_obj;



+static CONSTEXPR const th_vsseg th_vsseg_obj;



+static CONSTEXPR const th_vlsseg th_vlsseg_obj;



+static CONSTEXPR const th_vssseg th_vssseg_obj;



+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;



+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;



+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;



+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;



+static CONSTEXPR const th_vlsegff th_vlsegff_obj;



+



+/* Declare the function base NAME, pointing it to an instance



+   of class <NAME>_obj.  */



+#define BASE(NAME) \



+  namespace bases { const function_base *const NAME = &NAME##_obj; }



+
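Concretely, each invocation below expands to one definition; for example BASE (th_vsetvl) becomes

    namespace bases { const function_base *const th_vsetvl = &th_vsetvl_obj; }

pairing the extern declaration in thead-vector-builtins.h with its object.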



+BASE (th_vsetvl)



+BASE (th_vsetvlmax)



+BASE (th_vle)



+BASE (th_vse)



+BASE (th_vlm)



+BASE (th_vsm)



+BASE (th_vlse)



+BASE (th_vsse)



+BASE (th_vluxei8)



+BASE (th_vluxei16)



+BASE (th_vluxei32)



+BASE (th_vluxei64)



+BASE (th_vloxei8)



+BASE (th_vloxei16)



+BASE (th_vloxei32)



+BASE (th_vloxei64)



+BASE (th_vsuxei8)



+BASE (th_vsuxei16)



+BASE (th_vsuxei32)



+BASE (th_vsuxei64)



+BASE (th_vsoxei8)



+BASE (th_vsoxei16)



+BASE (th_vsoxei32)



+BASE (th_vsoxei64)



+BASE (th_vneg)



+BASE (th_vnot)



+BASE (th_vnsrl)



+BASE (th_vnsra)



+BASE (th_vncvt_x)



+BASE (th_vnclip)



+BASE (th_vnclipu)



+BASE (th_vcpop)



+BASE (th_vfirst)



+BASE (th_vmadc)



+BASE (th_vmsbc)



+BASE (th_vfncvt_x)



+BASE (th_vfncvt_x_frm)



+BASE (th_vfncvt_xu)



+BASE (th_vfncvt_xu_frm)



+BASE (th_vfncvt_f)



+BASE (th_vfncvt_f_frm)



+BASE (th_vfredusum)



+BASE (th_vfredusum_frm)



+BASE (th_vfredosum)



+BASE (th_vfredosum_frm)



+BASE (th_vfwredusum)



+BASE (th_vfwredusum_frm)



+BASE (th_vfwredosum)



+BASE (th_vfwredosum_frm)



+BASE (th_vleff)



+BASE (th_vlseg)



+BASE (th_vsseg)



+BASE (th_vlsseg)



+BASE (th_vssseg)



+BASE (th_vluxseg)



+BASE (th_vloxseg)



+BASE (th_vsuxseg)



+BASE (th_vsoxseg)



+BASE (th_vlsegff)



+



+} // end namespace riscv_vector



diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h



new file mode 100644



index 00000000000..d0bf00b8e81



--- /dev/null



+++ b/gcc/config/riscv/thead-vector-builtins.h



@@ -0,0 +1,92 @@



+/* function_base declaration for RISC-V XTheadVector Extension



+   for GNU compiler.



+   Copyright (C) 2022-2023 Free Software Foundation, Inc.



+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head



+   Semiconductor Co., Ltd.



+



+   This file is part of GCC.



+



+   GCC is free software; you can redistribute it and/or modify it



+   under the terms of the GNU General Public License as published by



+   the Free Software Foundation; either version 3, or (at your option)



+   any later version.



+



+   GCC is distributed in the hope that it will be useful, but



+   WITHOUT ANY WARRANTY; without even the implied warranty of



+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU



+   General Public License for more details.



+



+   You should have received a copy of the GNU General Public License



+   along with GCC; see the file COPYING3.  If not see



+   <http://www.gnu.org/licenses/>.  */



+



+#ifndef GCC_THEAD_VECTOR_BUILTINS_H



+#define GCC_THEAD_VECTOR_BUILTINS_H



+



+namespace riscv_vector {



+



+namespace bases {



+extern const function_base *const th_vsetvl;



+extern const function_base *const th_vsetvlmax;



+extern const function_base *const th_vle;



+extern const function_base *const th_vse;



+extern const function_base *const th_vlm;



+extern const function_base *const th_vsm;



+extern const function_base *const th_vlse;



+extern const function_base *const th_vsse;



+extern const function_base *const th_vluxei8;



+extern const function_base *const th_vluxei16;



+extern const function_base *const th_vluxei32;



+extern const function_base *const th_vluxei64;



+extern const function_base *const th_vloxei8;



+extern const function_base *const th_vloxei16;



+extern const function_base *const th_vloxei32;



+extern const function_base *const th_vloxei64;



+extern const function_base *const th_vsuxei8;



+extern const function_base *const th_vsuxei16;



+extern const function_base *const th_vsuxei32;



+extern const function_base *const th_vsuxei64;



+extern const function_base *const th_vsoxei8;



+extern const function_base *const th_vsoxei16;



+extern const function_base *const th_vsoxei32;



+extern const function_base *const th_vsoxei64;



+extern const function_base *const th_vneg;



+extern const function_base *const th_vnot;



+extern const function_base *const th_vnsrl;



+extern const function_base *const th_vnsra;



+extern const function_base *const th_vncvt_x;



+extern const function_base *const th_vnclip;



+extern const function_base *const th_vnclipu;



+extern const function_base *const th_vcpop;



+extern const function_base *const th_vfirst;



+extern const function_base *const th_vmadc;



+extern const function_base *const th_vmsbc;



+extern const function_base *const th_vfncvt_x;



+extern const function_base *const th_vfncvt_x_frm;



+extern const function_base *const th_vfncvt_xu;



+extern const function_base *const th_vfncvt_xu_frm;



+extern const function_base *const th_vfncvt_f;



+extern const function_base *const th_vfncvt_f_frm;



+extern const function_base *const th_vfredusum;



+extern const function_base *const th_vfredusum_frm;



+extern const function_base *const th_vfredosum;



+extern const function_base *const th_vfredosum_frm;



+extern const function_base *const th_vfwredusum;



+extern const function_base *const th_vfwredusum_frm;



+extern const function_base *const th_vfwredosum;



+extern const function_base *const th_vfwredosum_frm;



+extern const function_base *const th_vleff;



+extern const function_base *const th_vlseg;



+extern const function_base *const th_vsseg;



+extern const function_base *const th_vlsseg;



+extern const function_base *const th_vssseg;



+extern const function_base *const th_vluxseg;



+extern const function_base *const th_vloxseg;



+extern const function_base *const th_vsuxseg;



+extern const function_base *const th_vsoxseg;



+extern const function_base *const th_vlsegff;



+} // end namespace bases



+



+} // end namespace riscv_vector



+



+#endif



diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md



new file mode 100644



index 00000000000..072fb5e68e1



--- /dev/null



+++ b/gcc/config/riscv/thead-vector.md



@@ -0,0 +1,2574 @@



+(define_c_enum "unspec" [



+  UNSPEC_TH_VWLDST



+])



+



+(define_int_attr th_order [



+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")



+])



+



+(define_int_attr th_reduc_op [



+  (UNSPEC_REDUC_SUM "redsum")



+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")



+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")



+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")



+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")



+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")



+])



+



+(define_code_iterator neg_unop [neg])



+(define_code_iterator not_unop [not])



+



+(define_code_iterator any_float_unop_neg [neg])



+(define_code_iterator any_float_unop_abs [abs])



+



+(define_mode_iterator V_VLS_VT [V VLS VT])



+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])



+



+(define_split



+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")



+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]



+  "TARGET_XTHEADVECTOR"



+  [(const_int 0)]



+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX,
+				      GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  }



+



+(define_insn_and_split "@pred_th_whole_mov<mode>"



+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")



+ (unspec:V_VLS_VT



+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")



+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")



+    (match_operand 3 "const_1_operand"         "  i, i, i")



+    (reg:SI VL_REGNUM)



+    (reg:SI VTYPE_REGNUM)]



+ UNSPEC_TH_VWLDST))]



+  "TARGET_XTHEADVECTOR"



+  "@



+   vmv.v.v\t%0,%1



+   vle.v\t%0,%1



+   vse.v\t%1,%0"



+  "&& REG_P (operands[0]) && REG_P (operands[1])



+   && REGNO (operands[0]) == REGNO (operands[1])"



+  [(const_int 0)]



+  ""



+  [(set_attr "type" "vimov,vlds,vlds")



+   (set_attr "mode" "<MODE>")



+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))



+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))



+   (set (attr "avl_type_idx") (const_int 3))



+   (set_attr "vl_op_idx" "2")])



+



+(define_insn_and_split "@pred_th_whole_mov<mode>"



+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")



+ (unspec:VB



+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")



+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")



+    (match_operand 3 "const_1_operand"         "  i, i, i")



+    (reg:SI VL_REGNUM)



+    (reg:SI VTYPE_REGNUM)]



+ UNSPEC_TH_VWLDST))]



+  "TARGET_XTHEADVECTOR"



+  "@



+   vmv.v.v\t%0,%1



+   vle.v\t%0,%1



+   vse.v\t%1,%0"



+  "&& REG_P (operands[0]) && REG_P (operands[1])



+   && REGNO (operands[0]) == REGNO (operands[1])"



+  [(const_int 0)]



+  ""



+  [(set_attr "type" "vimov,vlds,vlds")



+   (set_attr "mode" "<MODE>")



+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))



+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))



+   (set (attr "avl_type_idx") (const_int 3))



+   (set_attr "vl_op_idx" "2")



+   (set (attr "sew") (const_int 8))



+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])



+



+(define_expand "@pred_th_mov<mode>"



+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")



+    (if_then_else:V_VLS



+      (unspec:<VM>



+        [(match_operand:<VM> 1 "vector_mask_operand")



+         (match_operand 4 "vector_length_operand")



+         (match_operand 5 "const_int_operand")



+         (match_operand 6 "const_int_operand")



+         (match_operand 7 "const_int_operand")



+         (reg:SI VL_REGNUM)



+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+      (match_operand:V_VLS 3 "vector_move_operand")



+      (match_operand:V_VLS 2 "vector_merge_operand")))]



+  "TARGET_XTHEADVECTOR"



+  {})



+



+(define_insn_and_split "*pred_broadcast<mode>"



+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")



+ (if_then_else:V_VLSI



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")



+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (vec_duplicate:V_VLSI



+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))



+   (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "@



+   vmv.v.x\t%0,%3



+   vmv.v.x\t%0,%3



+   vlse.v\t%0,%3,zero,%1.t



+   vlse.v\t%0,%3,zero,%1.t



+   vlse.v\t%0,%3,zero



+   vlse.v\t%0,%3,zero



+   vmv.s.x\t%0,%3



+   vmv.s.x\t%0,%3"



+  "(register_operand (operands[3], <VEL>mode)



+  || CONST_POLY_INT_P (operands[3]))



+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"



+  [(set (match_dup 0)
+	(if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+	     (match_dup 5) (match_dup 6) (match_dup 7)
+	     (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	  (vec_duplicate:V_VLSI (match_dup 3))
+	  (match_dup 2)))]
+  {
+    gcc_assert (can_create_pseudo_p ());
+    if (CONST_POLY_INT_P (operands[3]))
+      {
+	rtx tmp = gen_reg_rtx (<VEL>mode);
+	emit_move_insn (tmp, operands[3]);
+	operands[3] = tmp;
+      }
+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+				GET_MODE_ALIGNMENT (<VEL>mode));
+    m = validize_mem (m);
+    emit_move_insn (m, operands[3]);
+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+    operands[3] = m;
+
+    /* For SEW = 64 on an RV32 system, expand vmv.s.x as:
+	 andi a2,a2,1
+	 vsetvl zero,a2,e64
+	 vlse64.v  */
+    if (satisfies_constraint_Wb1 (operands[1]))
+      {
+	operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+	operands[1] = CONSTM1_RTX (<VM>mode);
+      }
+  }



+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_broadcast<mode>"



+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")



+ (if_then_else:V_VLSF_ZVFHMIN



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")



+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (vec_duplicate:V_VLSF_ZVFHMIN



+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))



+   (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "@



+   vfmv.v.f\t%0,%3



+   vfmv.v.f\t%0,%3



+   vlse.v\t%0,%3,zero,%1.t



+   vlse.v\t%0,%3,zero,%1.t



+   vlse.v\t%0,%3,zero



+   vlse.v\t%0,%3,zero



+   vfmv.s.f\t%0,%3



+   vfmv.s.f\t%0,%3"



+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")



+   (set_attr "mode" "<MODE>")])



+



+;; Whole-vector moves, expanded as vle.v/vse.v/vmv.v.v.



+(define_insn_and_split "*pred_th_mov<mode>"



+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")



+    (if_then_else:V_VLS



+      (unspec:<VM>



+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")



+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")



+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")



+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")



+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")



+         (reg:SI VL_REGNUM)



+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")



+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]



+  "(TARGET_XTHEADVECTOR



+    && (register_operand (operands[0], <MODE>mode)



+        || register_operand (operands[3], <MODE>mode)))"



+  "@



+   vle.v\t%0,%3%p1



+   vle.v\t%0,%3



+   vle.v\t%0,%3,%1.t



+   vse.v\t%3,%0%p1



+   vmv.v.v\t%0,%3



+   vmv.v.v\t%0,%3"



+  "&& register_operand (operands[0], <MODE>mode)



+   && register_operand (operands[3], <MODE>mode)



+   && satisfies_constraint_vu (operands[2])



+   && INTVAL (operands[7]) == riscv_vector::VLMAX"



+  [(set (match_dup 0) (match_dup 3))]



+  ""



+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn_and_split "@pred_th_mov<mode>"



+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")



+ (if_then_else:VB_VLS



+   (unspec:VB_VLS



+     [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")



+      (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")



+      (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")



+   (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]



+  "TARGET_XTHEADVECTOR"



+  "@



+   #



+   #



+   vmcpy.m\t%0,%3



+   vmclr.m\t%0



+   vmset.m\t%0"



+  "&& !reload_completed"



+  [(const_int 0)]



+  {



+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))



+        || (REG_P (operands[0]) && REG_P (operands[3])



+     && INTVAL (operands[5]) == riscv_vector::VLMAX))



+      {



+ emit_move_insn (operands[0], operands[3]);



+ DONE;



+      }



+



+    FAIL;



+  }



+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_store<mode>"



+  [(set (match_operand:V 0 "memory_operand"                 "+m")



+ (if_then_else:V



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")



+      (match_operand 3 "vector_length_operand"    "   rK")



+      (match_operand 4 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operand:V 2 "register_operand"         "    vr")



+   (match_dup 0)))]



+  "TARGET_XTHEADVECTOR"



+  "vse.v\t%2,%0%p1"



+  [(set_attr "type" "vste")



+   (set_attr "mode" "<MODE>")



+   (set (attr "avl_type_idx") (const_int 4))



+   (set_attr "vl_op_idx" "3")])



+



+(define_insn "@pred_th_strided_load<mode>"



+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")



+ (if_then_else:V



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")



+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")



+      (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")



+      (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")



+      (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V



+     [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")



+      (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)



+   (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]



+  "TARGET_XTHEADVECTOR"



+  "@



+  vlse.v\t%0,%3,%z4%p1



+  vlse.v\t%0,%3,%z4



+  vlse.v\t%0,%3,%z4,%1.t



+  vle.v\t%0,%3%p1



+  vle.v\t%0,%3



+  vle.v\t%0,%3,%1.t"



+  [(set_attr "type" "vlds")



+   (set_attr "mode" "<MODE>")])



+
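Per element, @pred_th_strided_load behaves like the sketch below; the vle.v alternatives cover the case where the stride operand is absent (unit stride). A model only, with a 32-bit element type picked for illustration:

    #include <stddef.h>
    #include <stdint.h>

    static void model_vlse (int32_t *dst, const void *base,
                            ptrdiff_t stride_bytes, size_t vl)
    {
      for (size_t i = 0; i < vl; i++)
        dst[i] = *(const int32_t *)
                 ((const char *) base + (ptrdiff_t) i * stride_bytes);
    }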



+(define_insn "@pred_th_strided_store<mode>"



+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")



+ (if_then_else:V



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK,       rK")



+      (match_operand 5 "const_int_operand"        "    i,        i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V



+     [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")



+      (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)



+   (match_dup 0)))]



+  "TARGET_XTHEADVECTOR"



+  "@



+  vsse.v\t%3,%0,%z2%p1



+  vse.v\t%3,%0%p1"



+  [(set_attr "type" "vsts")



+   (set_attr "mode" "<MODE>")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+



+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"



+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")



+ (if_then_else:V



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")



+      (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")



+      (match_operand 6 "const_int_operand"         "  i,  i, i,  i")



+      (match_operand 7 "const_int_operand"         "  i,  i, i,  i")



+      (match_operand 8 "const_int_operand"         "  i,  i, i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V



+     [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)



+   (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+;; DEST eew is greater than SOURCE eew.



+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"



+  [(set (match_operand:VEEWEXT2 0 "register_operand"                    "=&vr,  &vr")



+ (if_then_else:VEEWEXT2



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "    i,    i")



+      (match_operand 7 "const_int_operand"                      "    i,    i")



+      (match_operand 8 "const_int_operand"                      "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWEXT2



+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)



+   (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"



+  [(set (match_operand:VEEWEXT4 0 "register_operand"                    "=&vr,  &vr")



+ (if_then_else:VEEWEXT4



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "    i,    i")



+      (match_operand 7 "const_int_operand"                      "    i,    i")



+      (match_operand 8 "const_int_operand"                      "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWEXT4



+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)



+   (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"



+  [(set (match_operand:VEEWEXT8 0 "register_operand"                    "=&vr,  &vr")



+ (if_then_else:VEEWEXT8



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "    i,    i")



+      (match_operand 7 "const_int_operand"                      "    i,    i")



+      (match_operand 8 "const_int_operand"                      "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWEXT8



+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)



+   (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+;; DEST eew is smaller than SOURCE eew.



+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"



+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")



+ (if_then_else:VEEWTRUNC2



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWTRUNC2



+     [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)



+   (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"



+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")



+ (if_then_else:VEEWTRUNC4



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWTRUNC4



+     [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)



+   (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"



+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")



+ (if_then_else:VEEWTRUNC8



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VEEWTRUNC8



+     [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)



+   (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxe.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vld<order>x")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")



+    (match_operand:RATIO64I 2 "register_operand" "  vr")



+    (match_operand:RATIO64 3 "register_operand"  "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO64:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")



+    (match_operand:RATIO32I 2 "register_operand" "  vr")



+    (match_operand:RATIO32 3 "register_operand"  "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO32:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")



+    (match_operand:RATIO16I 2 "register_operand" "  vr")



+    (match_operand:RATIO16 3 "register_operand"  "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO16:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")



+    (match_operand:RATIO8I 2 "register_operand" "  vr")



+    (match_operand:RATIO8 3 "register_operand"  "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO8:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")



+    (match_operand:RATIO4I 2 "register_operand" "  vr")



+    (match_operand:RATIO4 3 "register_operand"  "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO4:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")



+    (match_operand:RATIO2I 2 "register_operand"  "  vr")



+    (match_operand:RATIO2 3 "register_operand"   "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO2:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")



+    (match_operand:RATIO1 2 "register_operand"   "  vr")



+    (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vstux")



+   (set_attr "mode" "<RATIO1:MODE>")])



+
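The block of @pred_th_indexed_<th_order>store patterns above is repeated once per supported data/index EEW ratio (RATIO64 down to RATIO1); all of them emit the same scatter. Per element it behaves like this sketch (my reading is that index values are byte offsets, as in standard RVV; the ordered variant additionally fixes the order in which element stores become visible):

    #include <stddef.h>
    #include <stdint.h>

    static void model_vsxe (void *base, const uint32_t *idx,
                            const int32_t *val, size_t vl)
    {
      for (size_t i = 0; i < vl; i++)
        *(int32_t *) ((char *) base + idx[i]) = val[i];
    }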



+(define_insn "@pred_th_popcount<VB:mode><P:mode>"



+  [(set (match_operand:P 0 "register_operand"               "=r")



+ (popcount:P



+   (unspec:VB



+     [(and:VB



+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")



+        (match_operand:VB 2 "register_operand"    "   vr"))



+      (match_operand 3 "vector_length_operand"    "   rK")



+      (match_operand 4 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]



+  "TARGET_XTHEADVECTOR"



+  "vmpopc.m\t%0,%2%p1"



+  [(set_attr "type" "vmpop")



+   (set_attr "mode" "<VB:MODE>")])



+



+(define_insn "@pred_th_ffs<VB:mode><P:mode>"



+  [(set (match_operand:P 0 "register_operand"                 "=r")



+ (plus:P



+   (ffs:P



+     (unspec:VB



+       [(and:VB



+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")



+          (match_operand:VB 2 "register_operand"    "   vr"))



+        (match_operand 3 "vector_length_operand"    "   rK")



+        (match_operand 4 "const_int_operand"        "    i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))



+   (const_int -1)))]



+  "TARGET_XTHEADVECTOR"



+  "vmfirst.m\t%0,%2%p1"



+  [(set_attr "type" "vmffs")



+   (set_attr "mode" "<VB:MODE>")])



+
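The (plus (ffs ...) (const_int -1)) wrapper above is how vmfirst is described in RTL: GCC's ffs yields the 1-based position of the first set bit, or 0 when none is set, while th.vmfirst.m yields the 0-based element index, or -1. Subtracting one reconciles the two; the @pred_th_popcount pattern just above maps th.vmpopc.m onto popcount directly, with no such offset. Scalar model:

    /* model_vmfirst (0) == -1, matching vmfirst.m on an all-zero mask.  */
    static int model_vmfirst (unsigned mask)
    {
      return __builtin_ffs ((int) mask) - 1;
    }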



+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"



+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<VNCONVERT>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")



+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:<VNCONVERT>



+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)



+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"



+  [(set_attr "type" "vfncvtftoi")



+   (set_attr "mode" "<VNCONVERT>")



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



+(define_insn "@pred_th_narrow_<float_cvt><mode>"



+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<VNCONVERT>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")



+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+   (any_float:<VNCONVERT>



+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))



+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vfncvt.f.x<u>.v\t%0,%3%p1"



+  [(set_attr "type" "vfncvtitof")



+   (set_attr "mode" "<VNCONVERT>")



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



+(define_insn "@pred_th_narrow_<optab><mode>"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (truncate:<V_DOUBLE_TRUNC>



+     (any_shiftrt:VWEXTI



+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")



+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vn<insn>.v%o4\t%0,%3,%v4%p1"



+  [(set_attr "type" "vnshift")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])



+



+(define_insn "@pred_th_narrow_<optab><mode>_scalar"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (truncate:<V_DOUBLE_TRUNC>



+     (any_shiftrt:VWEXTI



+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")



+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vn<insn>.v%o4\t%0,%3,%4%p1"



+  [(set_attr "type" "vnshift")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])



+



+(define_insn "@pred_th_trunc<mode>"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (truncate:<V_DOUBLE_TRUNC>



+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vnsrl.vx\t%0,%3,x0%p1"



+  [(set_attr "type" "vnshift")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+
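@pred_th_trunc implements integer narrowing with a narrowing logical shift right whose shift amount is x0, i.e. zero: keeping the low half of each 2*SEW element is exactly a right shift by zero followed by truncation, so no dedicated instruction is needed. Scalar model:

    #include <stdint.h>

    static uint32_t model_vncvt (uint64_t wide)
    {
      return (uint32_t) (wide >> 0);  /* vnsrl.vx vd,vs,x0 */
    }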



+(define_insn "@pred_th_trunc<mode>"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+   (float_truncate:<V_DOUBLE_TRUNC>



+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vfncvt.f.f.v\t%0,%3%p1"



+  [(set_attr "type" "vfncvtftof")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



+(define_insn "@pred_th_fault_load<mode>"



+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")



+ (if_then_else:V



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")



+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")



+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")



+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")



+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V



+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)



+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))



+   (set (reg:SI VL_REGNUM)



+   (unspec:SI



+     [(if_then_else:V



+        (unspec:<VM>



+	  [(match_dup 1) (match_dup 4) (match_dup 5)
+	   (match_dup 6) (match_dup 7)
+	   (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)



+        (match_dup 2))] UNSPEC_MODIFY_VL))]



+  "TARGET_XTHEADVECTOR"



+  "vleff.v\t%0,%3%p1"



+  [(set_attr "type" "vldff")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_unit_strided_load<mode>"



+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")



+ (if_then_else:VT



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")



+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")



+      (match_operand 5 "const_int_operand"        "    i,     i,     i")



+      (match_operand 6 "const_int_operand"        "    i,     i,     i")



+      (match_operand 7 "const_int_operand"        "    i,     i,     i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VT



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")



+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)



+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]



+  "TARGET_XTHEADVECTOR"



+  "vlseg<nf>e.v\t%0,(%z3)%p1"



+  [(set_attr "type" "vlsegde")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_unit_strided_store<mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+       (match_operand 3 "vector_length_operand"    "   rK")



+       (match_operand 4 "const_int_operand"        "    i")



+       (reg:SI VL_REGNUM)



+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")



+    (match_operand:VT 2 "register_operand"         "   vr")



+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]



+  "TARGET_XTHEADVECTOR"



+  "vsseg<nf>e.v\t%2,(%z1)%p0"



+  [(set_attr "type" "vssegte")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_strided_load<mode>"



+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")



+ (if_then_else:VT



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")



+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")



+      (match_operand 6 "const_int_operand"        "    i,     i,     i")



+      (match_operand 7 "const_int_operand"        "    i,     i,     i")



+      (match_operand 8 "const_int_operand"        "    i,     i,     i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VT



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")



+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")



+      (mem:BLK (scratch))] UNSPEC_STRIDED)



+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]



+  "TARGET_XTHEADVECTOR"



+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"



+  [(set_attr "type" "vlsegds")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "@pred_th_strided_store<mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+       (match_operand 4 "vector_length_operand"    "   rK")



+       (match_operand 5 "const_int_operand"        "    i")



+       (reg:SI VL_REGNUM)



+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")



+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")



+    (match_operand:VT 3 "register_operand"         "   vr")



+    (mem:BLK (scratch))] UNSPEC_STRIDED))]



+  "TARGET_XTHEADVECTOR"



+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"



+  [(set_attr "type" "vssegts")



+   (set_attr "mode" "<MODE>")])



+



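+;; Segment fault-only-first load: as with vleff.v above, VL is truncated
+;; when a trap on a later segment is suppressed, modelled by the extra
+;; set of VL_REGNUM.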
+(define_insn "@pred_th_fault_load<mode>"



+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")



+ (if_then_else:VT



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")



+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")



+      (match_operand 5 "const_int_operand"        "    i,     i,     i")



+      (match_operand 6 "const_int_operand"        "    i,     i,     i")



+      (match_operand 7 "const_int_operand"        "    i,     i,     i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:VT



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")



+      (mem:BLK (scratch))] UNSPEC_VLEFF)



+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))



+   (set (reg:SI VL_REGNUM)



+        (unspec:SI



+          [(if_then_else:VT



+      (unspec:<VM>



+        [(match_dup 1) (match_dup 4) (match_dup 5)



+         (match_dup 6) (match_dup 7)



+         (reg:SI VL_REGNUM)



+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+      (unspec:VT



+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)



+      (match_dup 2))] UNSPEC_MODIFY_VL))]



+  "TARGET_XTHEADVECTOR"



+  "vlseg<nf>eff.v\t%0,(%z3)%p1"



+  [(set_attr "type" "vlsegdff")



+   (set_attr "mode" "<MODE>")])



+



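+;; Indexed segment loads: operand 4 supplies per-segment byte offsets.
+;; The destination is early-clobbered ("=&vr") so that it cannot overlap
+;; the index operand while the segments are being written.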
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"



+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")



+ (if_then_else:V1T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V1T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)



+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V1T:MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"



+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")



+ (if_then_else:V2T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V2T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)



+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V2T:MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"



+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")



+ (if_then_else:V4T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V4T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)



+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V4T:MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"



+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")



+ (if_then_else:V8T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V8T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)



+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V8T:MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"



+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")



+ (if_then_else:V16T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V16T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)



+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V16T:MODE>")])



+



+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"



+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")



+ (if_then_else:V32T



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"    "   rK,   rK")



+      (match_operand 6 "const_int_operand"        "    i,    i")



+      (match_operand 7 "const_int_operand"        "    i,    i")



+      (match_operand 8 "const_int_operand"        "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:V32T



+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")



+      (mem:BLK (scratch))



+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)



+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]



+  "TARGET_XTHEADVECTOR"



+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"



+  [(set_attr "type" "vlsegd<order>x")



+   (set_attr "mode" "<V32T:MODE>")])



+



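+;; Indexed segment stores: <th_order> selects between the ordered
+;; (vsxseg<nf>e.v) and unordered (vsuxseg<nf>e.v) forms of the
+;; instruction.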
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO64I 2 "register_operand"       "   vr")



+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V1T:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO32I 2 "register_operand"       "   vr")



+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V2T:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO16I 2 "register_operand"       "   vr")



+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V4T:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO8I 2 "register_operand"       "   vr")



+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V8T:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO4I 2 "register_operand"      "   vr")



+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V16T:MODE>")])



+



+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"



+  [(set (mem:BLK (scratch))



+ (unspec:BLK



+   [(unspec:<VM>



+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")



+      (match_operand 4 "vector_length_operand"    "   rK")



+      (match_operand 5 "const_int_operand"        "    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")



+    (match_operand:RATIO2I 2 "register_operand"      "   vr")



+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]



+  "TARGET_XTHEADVECTOR"



+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";



+  [(set_attr "type" "vssegtux")



+   (set_attr "mode" "<V32T:MODE>")])



+



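+;; Floating-point negate and absolute value have no dedicated opcode;
+;; they are emitted as sign-injection instructions on the same source:
+;; vfneg.v vd,vs == vfsgnjn.vv vd,vs,vs and vfabs.v vd,vs ==
+;; vfsgnjx.vv vd,vs,vs.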
+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")



+ (if_then_else:V_VLSF



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (any_float_unop_neg:V_VLSF



+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))



+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vfsgnjn.vv\t%0,%3,%3%p1"



+  [(set_attr "type" "<float_insn_type>")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")



+ (if_then_else:V_VLSF



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (any_float_unop_abs:V_VLSF



+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))



+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vfsgnjx.vv\t%0,%3,%3%p1"



+  [(set_attr "type" "<float_insn_type>")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



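+;; Integer one's complement and negate likewise map to existing opcodes:
+;; vnot.v is the assembler pseudo for vxor.vi vd,vs,-1, and vneg is
+;; emitted as vrsub.vx vd,vs,x0 below.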
+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")



+ (if_then_else:V_VLSI



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")



+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")



+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")



+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (not_unop:V_VLSI



+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))



+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vnot.v\t%0,%3%p1"



+  [(set_attr "type" "vialu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")



+ (if_then_else:V_VLSI



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")



+      (match_operand 5 "const_int_operand" " i, i,  i,  i")



+      (match_operand 6 "const_int_operand" " i, i,  i,  i")



+      (match_operand 7 "const_int_operand" " i, i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (neg_unop:V_VLSI



+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))



+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vrsub.vx\t%0,%3,x0%p1"



+  [(set_attr "type" "vialu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")



+ (if_then_else:V_VLSF



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+   (any_float_unop:V_VLSF



+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))



+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vf<insn>.v\t%0,%3%p1"



+  [(set_attr "type" "<float_insn_type>")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



+(define_insn "@pred_th_narrow_clip<v_su><mode>"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:<V_DOUBLE_TRUNC>



+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")



+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"



+  [(set_attr "type" "vnclip")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])



+



+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"



+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")



+ (if_then_else:<V_DOUBLE_TRUNC>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")



+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")



+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)



+   (unspec:<V_DOUBLE_TRUNC>



+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")



+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)



+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR"



+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"



+  [(set_attr "type" "vnclip")



+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])



+



+;; Float Reduction Sum (vfred[ou]sum.vs)



+(define_insn "@pred_th_<th_reduc_op><mode>"



+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")



+ (unspec:<V_LMUL1>



+   [(unspec:<VM>



+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")



+      (match_operand               5 "vector_length_operand" "   rK,   rK")



+      (match_operand               6 "const_int_operand"     "    i,    i")



+      (match_operand               7 "const_int_operand"     "    i,    i")



+      (match_operand               8 "const_int_operand"     "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+           (unspec:<V_LMUL1> [



+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")



+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")



+           ] ANY_FREDUC_SUM)



+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]



+  "TARGET_XTHEADVECTOR"



+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"



+  [(set_attr "type" "vfred<order>")



+   (set_attr "mode" "<MODE>")



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)



+(define_insn "@pred_th_<th_reduc_op><mode>"



+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")



+ (unspec:<V_EXT_LMUL1>



+   [(unspec:<VM>



+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")



+      (match_operand                5 "vector_length_operand" "   rK,   rK")



+      (match_operand                6 "const_int_operand"     "    i,    i")



+      (match_operand                7 "const_int_operand"     "    i,    i")



+      (match_operand                8 "const_int_operand"     "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+           (unspec:<V_EXT_LMUL1> [



+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")



+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")



+           ] ANY_FWREDUC_SUM)



+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]



+  "TARGET_XTHEADVECTOR"



+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"



+  [(set_attr "type" "vfwred<order>")



+   (set_attr "mode" "<MODE>")



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])



+



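+;; vmadc/vmsbc produce the carry-out (borrow-out) of an add (subtract)
+;; with carry-in mask operand 3, written as a mask result.  The
+;; destination is early-clobbered ("=&vr") because the mask result must
+;; not overlap the source register groups.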
+(define_insn "@pred_th_madc<mode>"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")



+ (unspec:<VM>



+    [(plus:VI



+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")



+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))



+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")



+        (match_operand 5 "const_int_operand"     "   i,   i,   i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.v%o2m\t%0,%1,%v2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_insn "@pred_th_msbc<mode>"



+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")



+ (unspec:<VM>



+    [(minus:VI



+      (match_operand:VI 1 "register_operand"     "  vr")



+      (match_operand:VI 2 "register_operand"     " vr"))



+     (match_operand:<VM> 3 "register_operand"    " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand" " rK")



+        (match_operand 5 "const_int_operand"     "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vvm\t%0,%1,%2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_insn "@pred_th_madc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(plus:VI_QHS



+      (vec_duplicate:VI_QHS



+        (match_operand:<VEL> 2 "register_operand" "  r"))



+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))



+     (match_operand:<VM> 3 "register_operand"     " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"  " rK")



+        (match_operand 5 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vxm\t%0,%1,%2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_insn "@pred_th_msbc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(minus:VI_QHS



+      (vec_duplicate:VI_QHS



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))



+     (match_operand:<VM> 3 "register_operand"     " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"  " rK")



+        (match_operand 5 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vxm\t%0,%1,%z2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



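+;; For 64-bit element types, sew64_scalar_helper (riscv-v.cc) deals with
+;; scalars that do not fit in a single X register on RV32: when needed it
+;; broadcasts the scalar into a vector and re-enters the base pattern
+;; through the callback, while simm5 immediates are kept inline.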
+(define_expand "@pred_th_madc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_int_operand"))



+      (match_operand:VI_D 1 "register_operand"))



+     (match_operand:<VM> 3 "register_operand")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand")



+        (match_operand 5 "const_int_operand")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]



+  "TARGET_XTHEADVECTOR"



+{



+  if (riscv_vector::sew64_scalar_helper (



+ operands,



+ /* scalar op */&operands[2],



+ /* vl */operands[4],



+ <MODE>mode,



+ riscv_vector::simm5_p (operands[2]),



+ [] (rtx *operands, rtx boardcast_scalar) {



+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],



+        boardcast_scalar, operands[3], operands[4], operands[5]));



+        },



+ (riscv_vector::avl_type) INTVAL (operands[5])))



+    DONE;



+})



+



+(define_insn "*pred_th_madc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_D 1 "register_operand"    "  vr"))



+     (match_operand:<VM> 3 "register_operand"     " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"  " rK")



+        (match_operand 5 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vxm\t%0,%1,%z2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_insn "*pred_th_madc<mode>_extended_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (sign_extend:<VEL>



+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))



+      (match_operand:VI_D 1 "register_operand"         "  vr"))



+     (match_operand:<VM> 3 "register_operand"          " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"       " rK")



+        (match_operand 5 "const_int_operand"           "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vxm\t%0,%1,%z2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_expand "@pred_th_msbc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_int_operand"))



+      (match_operand:VI_D 1 "register_operand"))



+     (match_operand:<VM> 3 "register_operand")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand")



+        (match_operand 5 "const_int_operand")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]



+  "TARGET_XTHEADVECTOR"



+{



+  if (riscv_vector::sew64_scalar_helper (



+ operands,



+ /* scalar op */&operands[2],



+ /* vl */operands[4],



+ <MODE>mode,



+ false,



+ [] (rtx *operands, rtx boardcast_scalar) {



+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],



+        boardcast_scalar, operands[3], operands[4], operands[5]));



+        },



+ (riscv_vector::avl_type) INTVAL (operands[5])))



+    DONE;



+})



+



+(define_insn "*pred_th_msbc<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_D 1 "register_operand"    "  vr"))



+     (match_operand:<VM> 3 "register_operand"     " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"  " rK")



+        (match_operand 5 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vxm\t%0,%1,%z2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



+(define_insn "*pred_th_msbc<mode>_extended_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (sign_extend:<VEL>



+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))



+      (match_operand:VI_D 1 "register_operand"         "  vr"))



+     (match_operand:<VM> 3 "register_operand"          " vm")



+     (unspec:<VM>



+       [(match_operand 4 "vector_length_operand"       " rK")



+        (match_operand 5 "const_int_operand"           "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vxm\t%0,%1,%z2,%3"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "avl_type_idx") (const_int 5))])



+



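+;; The _overflow variants compute the carry/borrow-out of a plain add or
+;; subtract with no carry-in, i.e. the two-operand vmadc.vv/vx/vi and
+;; vmsbc.vv/vx forms.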
+(define_insn "@pred_th_madc<mode>_overflow"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")



+ (unspec:<VM>



+    [(plus:VI



+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")



+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")



+        (match_operand 4 "const_int_operand"     "   i,   i,   i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.v%o2\t%0,%1,%v2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_insn "@pred_th_msbc<mode>_overflow"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(minus:VI



+      (match_operand:VI 1 "register_operand"     "   vr")



+      (match_operand:VI 2 "register_operand"     "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand" "  rK")



+        (match_operand 4 "const_int_operand"     "   i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vv\t%0,%1,%2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_insn "@pred_th_madc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(plus:VI_QHS



+      (vec_duplicate:VI_QHS



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"  " rK")



+        (match_operand 4 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_insn "@pred_th_msbc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(minus:VI_QHS



+      (vec_duplicate:VI_QHS



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"  " rK")



+        (match_operand 4 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_expand "@pred_th_madc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_int_operand"))



+      (match_operand:VI_D 1 "register_operand"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand")



+        (match_operand 4 "const_int_operand")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+{



+  if (riscv_vector::sew64_scalar_helper (



+ operands,



+ /* scalar op */&operands[2],



+ /* vl */operands[3],



+ <MODE>mode,



+ riscv_vector::simm5_p (operands[2]),



+ [] (rtx *operands, rtx boardcast_scalar) {



+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],



+        boardcast_scalar, operands[3], operands[4]));



+        },



+ (riscv_vector::avl_type) INTVAL (operands[4])))



+    DONE;



+})



+



+(define_insn "*pred_th_madc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_D 1 "register_operand"    "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"  " rK")



+        (match_operand 4 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")



+ (unspec:<VM>



+    [(plus:VI_D



+      (vec_duplicate:VI_D



+        (sign_extend:<VEL>



+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))



+      (match_operand:VI_D 1 "register_operand"         "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"       " rK")



+        (match_operand 4 "const_int_operand"           "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmadc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_expand "@pred_th_msbc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_int_operand"))



+      (match_operand:VI_D 1 "register_operand"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand")



+        (match_operand 4 "const_int_operand")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+{



+  if (riscv_vector::sew64_scalar_helper (



+ operands,



+ /* scalar op */&operands[2],



+ /* vl */operands[3],



+ <MODE>mode,



+ false,



+ [] (rtx *operands, rtx boardcast_scalar) {



+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],



+        boardcast_scalar, operands[3], operands[4]));



+        },



+ (riscv_vector::avl_type) INTVAL (operands[4])))



+    DONE;



+})



+



+(define_insn "*pred_th_msbc<mode>_overflow_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))



+      (match_operand:VI_D 1 "register_operand"    "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"  " rK")



+        (match_operand 4 "const_int_operand"      "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")



+ (unspec:<VM>



+    [(minus:VI_D



+      (vec_duplicate:VI_D



+        (sign_extend:<VEL>



+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))



+      (match_operand:VI_D 1 "register_operand"         "  vr"))



+     (unspec:<VM>



+       [(match_operand 3 "vector_length_operand"      " rK")



+        (match_operand 4 "const_int_operand"          "  i")



+        (reg:SI VL_REGNUM)



+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]



+  "TARGET_XTHEADVECTOR"



+  "vmsbc.vx\t%0,%1,%z2"



+  [(set_attr "type" "vicalu")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "3")



+   (set (attr "avl_type_idx") (const_int 4))])



+



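+;; XTheadVector's vtype encoding predates the tail/mask-agnostic bits, so
+;; the vsetvli templates below only print SEW and LMUL; the ta/ma
+;; operands are still recorded as insn attributes for the later vsetvl
+;; pass.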
+(define_insn "*th_vsetvl<mode>"



+  [(set (match_operand:P 0 "register_operand" "=r")



+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")



+    (match_operand 2 "const_int_operand" "i")



+    (match_operand 3 "const_int_operand" "i")



+    (match_operand 4 "const_int_operand" "i")



+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))



+   (set (reg:SI VL_REGNUM)



+ (unspec:SI [(match_dup 1)



+     (match_dup 2)



+     (match_dup 3)] UNSPEC_VSETVL))



+   (set (reg:SI VTYPE_REGNUM)



+ (unspec:SI [(match_dup 2)



+     (match_dup 3)



+     (match_dup 4)



+     (match_dup 5)] UNSPEC_VSETVL))]



+  "TARGET_XTHEADVECTOR"



+  "vsetvli\t%0,%1,e%2,%m3"



+  [(set_attr "type" "vsetvl")



+   (set_attr "mode" "<MODE>")



+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))



+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))



+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))



+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])



+



+;; vsetvl zero,zero,vtype instruction.



+;; This pattern has no side effects and does not set X0 register.



+(define_insn "*th_vsetvl_vtype_change_only"



+  [(set (reg:SI VTYPE_REGNUM)



+ (unspec:SI



+   [(match_operand 0 "const_int_operand" "i")



+    (match_operand 1 "const_int_operand" "i")



+    (match_operand 2 "const_int_operand" "i")



+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]



+  "TARGET_XTHEADVECTOR"



+  "vsetvli\tzero,zero,e%0,%m1"



+  [(set_attr "type" "vsetvl")



+   (set_attr "mode" "SI")



+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))



+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))



+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))



+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])



+



+;; vsetvl zero,rs1,vtype instruction.



+;; The reason we need this pattern since we should avoid setting X0 register



+;; in vsetvl instruction pattern.



+(define_insn "*th_vsetvl_discard_result<mode>"



+  [(set (reg:SI VL_REGNUM)



+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")



+     (match_operand 1 "const_int_operand" "i")



+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))



+   (set (reg:SI VTYPE_REGNUM)



+ (unspec:SI [(match_dup 1)



+     (match_dup 2)



+     (match_operand 3 "const_int_operand" "i")



+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]



+  "TARGET_XTHEADVECTOR"



+  "vsetvli\tzero,%0,e%1,%m2"



+  [(set_attr "type" "vsetvl")



+   (set_attr "mode" "<MODE>")



+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))



+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))



+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))



+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])



+



+;; It's emit by vsetvl/vsetvlmax intrinsics with no side effects.



+;; Since we have many optmization passes from "expand" to "reload_completed",



+;; such pattern can allow us gain benefits of these optimizations.



+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"



+  [(set (match_operand:P 0 "register_operand" "=r")



+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")



+    (match_operand 2 "const_int_operand" "i")



+    (match_operand 3 "const_int_operand" "i")



+    (match_operand 4 "const_int_operand" "i")



+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]



+  "TARGET_XTHEADVECTOR"



+  "#"



+  "&& epilogue_completed"



+  [(parallel



+    [(set (match_dup 0)



+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)



+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))



+     (set (reg:SI VL_REGNUM)



+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))



+     (set (reg:SI VTYPE_REGNUM)



+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)



+       (match_dup 5)] UNSPEC_VSETVL))])]



+  ""



+  [(set_attr "type" "vsetvl")



+   (set_attr "mode" "SI")])



+



+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"        "   0")



+      (match_operand 5 "vector_length_operand"        "  rK")



+      (match_operand 6 "const_int_operand"            "   i")



+      (match_operand 7 "const_int_operand"            "   i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "comparison_except_ltge_operator"



+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")



+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_cmp<mode>"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_ltge_operator"



+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")



+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.v%o5\t%0,%4,%v5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_cmp<mode>_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_ltge_operator"



+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")



+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.v%o5\t%0,%4,%v5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"        "   0")



+      (match_operand 5 "vector_length_operand"        "  rK")



+      (match_operand 6 "const_int_operand"            "   i")



+      (match_operand 7 "const_int_operand"            "   i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "ltge_operator"



+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")



+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_ltge<mode>"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "ltge_operator"



+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")



+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.v%o5\t%0,%4,%v5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_ltge<mode>_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "ltge_operator"



+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")



+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.v%o5\t%0,%4,%v5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"          "  0")



+      (match_operand 5 "vector_length_operand"          " rK")



+      (match_operand 6 "const_int_operand"              "  i")



+      (match_operand 7 "const_int_operand"              "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")



+       (vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 4 "register_operand"      "  r"))])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.vx\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_cmp<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")



+       (vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_cmp<mode>_scalar_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")



+       (vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])



+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"           "  0")



+      (match_operand 5 "vector_length_operand"           " rK")



+      (match_operand 6 "const_int_operand"               "  i")



+      (match_operand 7 "const_int_operand"               "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "equality_operator"



+      [(vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 4 "register_operand"       "  r"))



+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.vx\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_eqne<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "equality_operator"



+      [(vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))



+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_eqne<mode>_scalar_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "equality_operator"



+      [(vec_duplicate:V_VLSI_QHS



+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))



+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"           "  0")



+      (match_operand 5 "vector_length_operand"           " rK")



+      (match_operand 6 "const_int_operand"               "  i")



+      (match_operand 7 "const_int_operand"               "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")



+       (vec_duplicate:V_VLSI_D



+         (match_operand:<VEL> 4 "register_operand"       "  r"))])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.vx\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"           "  0")



+      (match_operand 5 "vector_length_operand"           " rK")



+      (match_operand 6 "const_int_operand"               "  i")



+      (match_operand 7 "const_int_operand"               "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "equality_operator"



+      [(vec_duplicate:V_VLSI_D



+         (match_operand:<VEL> 4 "register_operand"       "  r"))



+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.vx\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_cmp<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")



+       (vec_duplicate:V_VLSI_D



+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_cmp<mode>_scalar_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "comparison_except_eqge_operator"



+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")



+       (vec_duplicate:V_VLSI_D



+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])



+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_eqne<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "equality_operator"



+      [(vec_duplicate:V_VLSI_D



+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))



+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"            "  0")



+      (match_operand 5 "vector_length_operand"            " rK")



+      (match_operand 6 "const_int_operand"                "  i")



+      (match_operand 7 "const_int_operand"                "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "equality_operator"



+      [(vec_duplicate:V_VLSI_D



+         (sign_extend:<VEL>



+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))



+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vms%B2.vx\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_eqne<mode>_extended_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"          "   rK,   rK")



+      (match_operand 7 "const_int_operand"              "    i,    i")



+      (match_operand 8 "const_int_operand"              "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "equality_operator"



+      [(vec_duplicate:V_VLSI_D



+         (sign_extend:<VEL>



+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))



+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "equality_operator"



+      [(vec_duplicate:V_VLSI_D



+         (sign_extend:<VEL>



+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))



+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vms%B3.vx\t%0,%4,%5%p1"



+  [(set_attr "type" "vicmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_cmp<mode>"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "signed_order_operator"



+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")



+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vmf%B3.vv\t%0,%4,%5%p1"



+  [(set_attr "type" "vfcmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"          "  0")



+      (match_operand 5 "vector_length_operand"          " rK")



+      (match_operand 6 "const_int_operand"              "  i")



+      (match_operand 7 "const_int_operand"              "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "signed_order_operator"



+      [(match_operand:V_VLSF 3 "register_operand"           " vr")



+       (match_operand:V_VLSF 4 "register_operand"           " vr")])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vmf%B2.vv\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vfcmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We use early-clobber for source LMUL > dest LMUL.



+(define_insn "*pred_th_cmp<mode>_narrow"



+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "signed_order_operator"



+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")



+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"



+  "vmf%B3.vv\t%0,%4,%5%p1"



+  [(set_attr "type" "vfcmp")



+   (set_attr "mode" "<MODE>")])



+



+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"



+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "register_operand"         "  0")



+      (match_operand 5 "vector_length_operand"         " rK")



+      (match_operand 6 "const_int_operand"             "  i")



+      (match_operand 7 "const_int_operand"             "  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 2 "signed_order_operator"



+      [(match_operand:V_VLSF 3 "register_operand"      " vr")



+       (vec_duplicate:V_VLSF



+         (match_operand:<VEL> 4 "register_operand"     "  f"))])



+   (match_dup 1)))]



+  "TARGET_XTHEADVECTOR"



+  "vmf%B2.vf\t%0,%3,%4,v0.t"



+  [(set_attr "type" "vfcmp")



+   (set_attr "mode" "<MODE>")



+   (set_attr "merge_op_idx" "1")



+   (set_attr "vl_op_idx" "5")



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))])



+



+;; We don't use early-clobber for LMUL <= 1 to get better codegen.



+(define_insn "*pred_th_cmp<mode>_scalar"



+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")



+ (if_then_else:<VM>



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")



+      (match_operand 6 "vector_length_operand"         "   rK,   rK")



+      (match_operand 7 "const_int_operand"             "    i,    i")



+      (match_operand 8 "const_int_operand"             "    i,    i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)



+   (match_operator:<VM> 3 "signed_order_operator"



+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")



+       (vec_duplicate:V_VLSF



+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])



+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]



+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"



+  "vmf%B3.vf\t%0,%4,%5%p1"



+  [(set_attr "type" "vfcmp")



+   (set_attr "mode" "<MODE>")])



+



+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "signed_order_operator"
+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"         "  0")
+      (match_operand 5 "vector_length_operand"         " rK")
+      (match_operand 6 "const_int_operand"             "  i")
+      (match_operand 7 "const_int_operand"             "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 4 "register_operand"     "  f"))
+       (match_operand:V_VLSF 3 "register_operand"      " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vmf%B2.vf\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSF
+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))
+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vmf%B3.vf\t%0,%4,%5%p1"
+  [(set_attr "type" "vfcmp")
+   (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
 V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
 vislide1up,vislide1down,vfslide1up,vfslide1down,\
 vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 (any_extend:VWEXTI
   (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 (any_extend:VQEXTI
   (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 (any_extend:VOEXTI
   (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c



@@ -1,4 +1,4 @@



/* { dg-do compile } */



/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */



-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */



+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */



diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp



index 7f13ff0ca56..70df6b1401c 100644



--- a/gcc/testsuite/lib/target-supports.exp



+++ b/gcc/testsuite/lib/target-supports.exp



@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {



     }]



}



+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.



+# Cache the result.



+



+proc check_effective_target_riscv_xtheadvector { } {



+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {



+       #ifndef __riscv_xtheadvector



+       #error "Not __riscv_xtheadvector"



+       #endif



+    }]



+}



+



+



# Return 1 if we can execute code when using dg-add-options riscv_v



proc check_effective_target_riscv_v_ok { } {



--



2.17.1



 



 





^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:27         ` 钟居哲
@ 2023-12-20 14:41           ` joshua
  2023-12-20 14:48             ` Re: [PATCH " 钟居哲
  2023-12-20 14:55             ` 钟居哲
  0 siblings, 2 replies; 69+ messages in thread
From: joshua @ 2023-12-20 14:41 UTC (permalink / raw)
  To: 钟居哲, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu

[-- Attachment #1: Type: text/plain, Size: 236300 bytes --]

Hi Juzhe,
Yes, XTheadVector does not have vfneg.v as a pseudo-instruction for vfsgnjn.vv.
We have listed all the differences between vector and xtheadvector in our spec; you may refer to it:
https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc
https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd
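To make the difference concrete, here is a minimal sketch (illustrative only, not part of the patch; the intrinsic name is from the standard RVV API and is assumed to carry over unchanged):

#include <riscv_vector.h>

/* RVV 1.0 can print the alias:          vfneg.v     v8,v8
   XTheadVector has no such alias, so the backend must print the
   underlying sign-injection form itself: vfsgnjn.vv  v8,v8,v8  */
vfloat32m1_t
negate (vfloat32m1_t v, size_t vl)
{
  return __riscv_vfneg_v_f32m1 (v, vl);
}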
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, December 20, 2023, 22:27
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Why do you add this?
+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand" " i, i, i, i")
+	     (match_operand 6 "const_int_operand" " i, i, i, i")
+	     (match_operand 7 "const_int_operand" " i, i, i, i")
+	     (match_operand 8 "const_int_operand" " i, i, i, i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
Isn't the TheadVector instruction th.vfneg.v?
juzhe.zhong@rivai.ai
From: joshua
Sent: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
The patterns you suppose to be redundant are all necessary, because they generate instructions different from the vector ones.
Take pred_th_unit_strided_store as an example: XTheadVector does not have <sew> in its load/store instructions,
so we cannot reuse the same pattern as vector. That is why we define a new function_base in thead-vector-builtins-functions.def. See the sketch below.
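For illustration, a minimal sketch of the divergence (mnemonics as in the spec linked earlier; the intrinsic name is from the standard RVV API and is assumed to carry over):

#include <riscv_vector.h>

/* RVV 1.0 encodes the element width in the mnemonic:
       vle32.v  v8,(a0)
   while the XTheadVector (0.7.1-based) unit-stride load carries no
   SEW suffix and takes the width from vtype instead:
       vle.v    v8,(a0)
   so a single define_insn whose template prints "vle<sew>.v" cannot
   serve both.  */
vint32m1_t
load_i32 (const int32_t *p, size_t vl)
{
  return __riscv_vle32_v_i32m1 (p, vl);
}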
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, December 20, 2023, 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
Why do you add these?
+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	      (match_operand 3 "vector_length_operand" " rK")
+	      (match_operand 4 "const_int_operand" " i")
+	      (reg:SI VL_REGNUM)
+	      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+	   (match_operand:VT 2 "register_operand" " vr")
+	   (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])
These patterns are redundant; only the names are different.
They should be removed.
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix
to all XTheadVector instructions is not included here.
For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md, so that
instructions which xtheadvector does not support,
such as vmv1r and vsext.vf2, are not generated.
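As a concrete illustration (a sketch, not taken from the patch), the integer-extension instructions are one such divergence:

#include <riscv_vector.h>

/* With standard RVV this can compile to a single
       vsext.vf2  v8,v12
   but vsext/vzext do not exist in XTheadVector, so the corresponding
   pattern is guarded with !TARGET_XTHEADVECTOR and the widening must
   be expressed through other instructions there.  */
vint32m2_t
widen (vint16m1_t v, size_t vl)
{
  return __riscv_vsext_vf2_i32m2 (v, vl);
}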
gcc/ChangeLog:
* config.gcc: Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
(build_one): New function.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewsie.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/t-riscv: Add new files.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector-builtins-functions.def: New file.
* config/riscv/thead-vector-builtins.cc: New file.
* config/riscv/thead-vector-builtins.h: New file.
* config/riscv/thead-vector.md: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc | 4 +-
gcc/config/riscv/autovec.md | 2 +-
gcc/config/riscv/predicates.md | 8 +-
gcc/config/riscv/riscv-string.cc | 3 +
gcc/config/riscv/riscv-v.cc | 13 +-
.../riscv/riscv-vector-builtins-shapes.cc | 23 +
gcc/config/riscv/riscv-vector-builtins.cc | 7 +
gcc/config/riscv/riscv-vector-builtins.h | 5 +-
gcc/config/riscv/riscv-vector-switch.def | 150 +-
gcc/config/riscv/riscv.cc | 20 +-
gcc/config/riscv/riscv_th_vector.h | 49 +
gcc/config/riscv/t-riscv | 16 +
.../riscv/thead-vector-builtins-functions.def | 627 ++++
gcc/config/riscv/thead-vector-builtins.cc | 746 +++++
gcc/config/riscv/thead-vector-builtins.h | 92 +
gcc/config/riscv/thead-vector.md | 2574 +++++++++++++++++
gcc/config/riscv/vector-iterators.md | 186 +-
gcc/config/riscv/vector.md | 36 +-
.../gcc.target/riscv/rvv/base/abi-1.c | 2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
gcc/testsuite/lib/target-supports.exp | 12 +
21 files changed, 4386 insertions(+), 191 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
- extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
 [(match_operand 0 "register_operand")
 (match_operand 1 "memory_operand")
 (match_operand:ANYI 2 "const_int_operand")]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 {
 riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
 (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates. This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
 (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop # Any more?
ret # Return
 */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
 gcc_assert (TARGET_VECTOR);
 HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
 return true;
 }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
 if (riscv_v_ext_vls_mode_p (mode))
 {
 if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return TAIL_ANY;
+ return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy. */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return MASK_ANY;
+ return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx. */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
- if (!TARGET_VECTOR)
+ if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
 return false;
 if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+ valid for the function. */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
 mode suffix at index PAIR && bi and predication suffix at index pred_idx. */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
 group.ops_infos.types[vec_type_idx].index);
 b.allocate_argument_types (function_instance, argument_types);
 b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
 b.add_overloaded_function (function_instance, *group.shape);
 b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
#include "riscv-vector-builtins.h"
#include "riscv-vector-builtins-shapes.h"
#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
using namespace riscv_vector;
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
 {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
#include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
};
/* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
 ZVKNHB_EXT, /* Crypto vector Zvknhb sub-ext */
 ZVKSED_EXT, /* Crypto vector Zvksed sub-ext */
 ZVKSH_EXT, /* Crypto vector Zvksh sub-ext */
+ XTHEADVECTOR_EXT, /* XTheadVector extension */
};
/* Enumerates the RVV operand types. */
@@ -233,7 +234,7 @@ struct function_group_info
 switch (ext_value)
 {
 case VECTOR_EXT:
- return TARGET_VECTOR;
+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);
 case ZVBB_EXT:
 return TARGET_ZVBB;
 case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
 return TARGET_ZVKSED;
 case ZVKSH_EXT:
 return TARGET_ZVKSH;
+ case XTHEADVECTOR_EXT:
+ return TARGET_XTHEADVECTOR;
 default:
 gcc_unreachable ();
 }
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32. */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64. */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
 if (riscv_v_ext_vector_mode_p (mode))
 {
+  if (TARGET_XTHEADVECTOR)
+    return BYTES_PER_RISCV_VECTOR;
+
 poly_int64 nunits = GET_MODE_NUNITS (mode);
 poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::preferred_simd_mode (mode);
 return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::autovectorize_vector_modes (modes, all);
 return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 return false;
}
+/* Implements target hook vector_mode_supported_any_target_p. */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure. */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short. It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself. The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h. */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
 $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
 $(RISCV_BUILTINS_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
+thead-vector-builtins.o: \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc \
+ $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+ $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+ $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+ gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+ rtx-vector-builder.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(RISCV_BUILTINS_H)
+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc
+
riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
 $(SYSTEM_H) $(TM_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use. */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions. */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores. */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions. */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16. Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions. */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions. */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations. */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions. */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions. */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions. */
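+// These helpers only reinterpret types or regroup registers, so they are
+// registered with none_preds and take no policy operands.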
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>. */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+ bool apply_vl_p () const override
+ {
+ return false;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
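+    /* For vsetvlmax the AVL operand is the x0 register, which requests
+       VLMAX; otherwise forward the user-supplied AVL (operand 0).  */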
+ if (VLMAX_P)
+ e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+ else
+ e.add_input_operand (0);
+
+ tree type = builtin_types[e.type.index].vector;
+ machine_mode mode = TYPE_MODE (type);
+
+ machine_mode inner_mode = GET_MODE_INNER (mode);
+ /* SEW. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+ /* LMUL. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_vlmul (mode), Pmode));
+
+ /* TAIL_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+ /* MASK_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_mask_policy (), Pmode));
+ return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+ }
+};
+
+/* Implements
+ * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
+ * codegen. */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return !STORE_P; }
+ bool apply_mask_policy_p () const override { return !STORE_P; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ if (STORE_P)
+ return CP_WRITE_MEMORY;
+ else
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ if (STORE_P || LST_TYPE == LST_INDEXED)
+ return true;
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (LST_TYPE == LST_INDEXED)
+ {
+ int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+ if (STORE_P)
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+ e.index_mode ()));
+ else
+ {
+ unsigned src_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+ unsigned dst_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
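+	    /* The pattern name encodes the ratio between the data EEW and
+	       the index EEW; dispatch to the matching same/greater/smaller
+	       EEW variant (x2/x4/x8).  */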
+ if (dst_eew_bitsize == src_eew_bitsize)
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_same_eew (
+ unspec, e.vector_mode ()));
+ }
+ else if (dst_eew_bitsize > src_eew_bitsize)
+ {
+ unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_greater_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_greater_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_greater_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_smaller_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ }
+ }
+ else if (LST_TYPE == LST_STRIDED)
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+ else
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_mov (e.vector_mode ()));
+ }
+ }
+};
+
+/* Implements vneg/vnot. */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+ }
+};
+
+/* Implements vnsrl/vnsra. */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (CODE, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vncvt. */
+class th_vncvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ }
+};
+
+/* Implements vnclip/vnclipu. */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override { return true; }
+
+ bool may_require_vxrm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vcpop. */
+class th_vcpop : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_popcount (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vfirst. */
+class th_vfirst : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_ffs (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vmadc. */
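+/* The result is a carry mask rather than a normal vector value, so tail
+   and mask policies and the merge operand are disabled; each overload
+   (vvm/vxm/vv/vx) maps to its own pattern.  */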
+class th_vmadc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vmsbc. */
+class th_vmsbc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vfncvt.x. */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+ }
+};
+
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_f_w)
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_x_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+ if (e.op_info->op == OP_TYPE_xu_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements floating-point reduction instructions. */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ bool apply_mask_policy_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+ }
+};
+
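+/* Implements vleff (fault-only-first load).  */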
+class th_vleff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vlseg.v. */
+class th_vlseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vsseg.v. */
+class th_vsseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_store (e.vector_mode ()));
+ }
+};
+
+/* Implements vlsseg.v. */
+class th_vlsseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vssseg.v. */
+class th_vssseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ }
+};
+
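+/* Implements vluxseg/vloxseg (indexed segment loads).  */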
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
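+/* Implements vsuxseg/vsoxseg (indexed segment stores).  */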
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
+/* Implements vlsegff.v. */
+class th_vlsegff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+ of class <NAME>_obj. */
+#define BASE(NAME) \
+ namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+(define_c_enum "unspec" [
+ UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+ (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+ (UNSPEC_REDUC_SUM "redsum")
+ (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+ (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+ (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+ (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+ (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
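+;; Split every whole-register vector move (including mask and tuple modes)
+;; into the predicated whole-mov patterns below, using a VLMAX AVL so the
+;; emitted vle.v/vse.v/vmv.v.v covers the full register group.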
+(define_split
+ [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+ "TARGET_XTHEADVECTOR"
+ [(const_int 0)]
+ {
+ emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+			 RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+ DONE;
+ })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:V_VLS_VT
+ [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")])
+
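+;; The mask-mode variant additionally pins SEW=8 and LMUL=1 via the
+;; attributes below, issuing the move as a plain byte-element operation.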
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:VB
+ [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")
+ (set (attr "sew") (const_int 8))
+ (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_expand "@pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "vector_move_operand")
+ (match_operand:V_VLS 2 "vector_merge_operand")))]
+ "TARGET_XTHEADVECTOR"
+ {})
+
+(define_insn_and_split "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vr, vr, vd, vd, vr, vr, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " r, r,Wdm,Wdm,Wdm,Wdm, r, r"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.x\t%0,%3
+ vmv.v.x\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vmv.s.x\t%0,%3
+ vmv.s.x\t%0,%3"
+ "(register_operand (operands[3], <VEL>mode)
+ || CONST_POLY_INT_P (operands[3]))
+ && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+ [(set (match_dup 0)
+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+ (match_dup 5) (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI (match_dup 3))
+ (match_dup 2)))]
+ {
+ gcc_assert (can_create_pseudo_p ());
+ if (CONST_POLY_INT_P (operands[3]))
+ {
+ rtx tmp = gen_reg_rtx (<VEL>mode);
+ emit_move_insn (tmp, operands[3]);
+ operands[3] = tmp;
+ }
+ rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+ GET_MODE_ALIGNMENT (<VEL>mode));
+ m = validize_mem (m);
+ emit_move_insn (m, operands[3]);
+ m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+ operands[3] = m;
+
+ /* For SEW = 64 on an RV32 system, we expand vmv.s.x into:
+ andi a2,a2,1
+ vsetvl zero,a2,e64
+ vlse.v */
+ if (satisfies_constraint_Wb1 (operands[1]))
+ {
+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+ operands[1] = CONSTM1_RTX (<VM>mode);
+ }
+ }
+ [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+ (set_attr "mode" "<MODE>")])
+
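+;; Floating-point broadcast: vfmv.v.f, a zero-stride vlse.v, or vfmv.s.f for a single-element write.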
+(define_insn "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand" "=vr, vr, vr, vr, vr, vr, vr, vr")
+ (if_then_else:V_VLSF_ZVFHMIN
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSF_ZVFHMIN
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " f, f,Wdm,Wdm,Wdm,Wdm, f, f"))
+ (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vfmv.v.f\t%0,%3
+ vfmv.v.f\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vfmv.s.f\t%0,%3
+ vfmv.s.f\t%0,%3"
+ [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+ (set_attr "mode" "<MODE>")])
+
+;; Predicated whole-vector moves: vle.v/vse.v for memory operands, vmv.v.v for register-to-register copies.
+(define_insn_and_split "*pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand" "=vr, vr, vd, m, vr, vr")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "reg_or_mem_operand" " m, m, m, vr, vr, vr")
+ (match_operand:V_VLS 2 "vector_merge_operand" " 0, vu, vu, vu, vu, 0")))]
+ "(TARGET_XTHEADVECTOR
+ && (register_operand (operands[0], <MODE>mode)
+ || register_operand (operands[3], <MODE>mode)))"
+ "@
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t
+ vse.v\t%3,%0%p1
+ vmv.v.v\t%0,%3
+ vmv.v.v\t%0,%3"
+ "&& register_operand (operands[0], <MODE>mode)
+ && register_operand (operands[3], <MODE>mode)
+ && satisfies_constraint_vu (operands[2])
+ && INTVAL (operands[7]) == riscv_vector::VLMAX"
+ [(set (match_dup 0) (match_dup 3))]
+ ""
+ [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+ (set_attr "mode" "<MODE>")])
+
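+;; Mask-register moves: memory transfers are split into plain moves; the register forms use vmcpy.m, vmclr.m and vmset.m.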
+(define_insn_and_split "@pred_th_mov<mode>"
+ [(set (match_operand:VB_VLS 0 "nonimmediate_operand" "=vr, m, vr, vr, vr")
+ (if_then_else:VB_VLS
+ (unspec:VB_VLS
+ [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:VB_VLS 3 "vector_move_operand" " m, vr, vr, Wc0, Wc1")
+ (match_operand:VB_VLS 2 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ #
+ #
+ vmcpy.m\t%0,%3
+ vmclr.m\t%0
+ vmset.m\t%0"
+ "&& !reload_completed"
+ [(const_int 0)]
+ {
+ if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+ || (REG_P (operands[0]) && REG_P (operands[3])
+ && INTVAL (operands[5]) == riscv_vector::VLMAX))
+ {
+ emit_move_insn (operands[0], operands[3]);
+ DONE;
+ }
+
+ FAIL;
+ }
+ [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+ (set_attr "mode" "<MODE>")])
+
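+;; Predicated store; tying the merge operand to the destination memory (match_dup 0) keeps masked-off elements untouched.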
+(define_insn "@pred_th_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V 2 "register_operand" " vr")
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "vse.v\t%2,%0%p1"
+ [(set_attr "type" "vste")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 4))
+ (set_attr "vl_op_idx" "3")])
+
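+;; Strided load: vlse.v with an explicit stride; the vle.v alternatives appear to cover a constant stride equal to the element size, i.e. a unit-stride access.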
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vr, vr, vd, vr, vr, vd")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m, m, m")
+ (match_operand 4 "<V:stride_predicate>" "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+ (match_operand:V 2 "vector_merge_operand" " 0, vu, vu, 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vlse.v\t%0,%3,%z4%p1
+ vlse.v\t%0,%3,%z4
+ vlse.v\t%0,%3,%z4,%1.t
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t"
+ [(set_attr "type" "vlds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m, m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 2 "<V:stride_predicate>" "<V:stride_store_constraint>")
+ (match_operand:V 3 "register_operand" " vr, vr")] UNSPEC_STRIDED)
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vsse.v\t%3,%0,%z2%p1
+ vse.v\t%3,%0%p1"
+ [(set_attr "type" "vsts")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 5))])
+
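+;; Indexed loads. XTheadVector only provides vlxe.v, whose index EEW equals SEW, so the same-EEW and the greater/smaller-EEW patterns below all emit the one mnemonic.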
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+ [(set (match_operand:V 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ,rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+ (match_operand:V 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST EEW is greater than SOURCE EEW (the index elements are narrower than the data elements).
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+ [(set (match_operand:VEEWEXT2 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT2 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+ [(set (match_operand:VEEWEXT4 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT4 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+ [(set (match_operand:VEEWEXT8 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT8 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST EEW is smaller than SOURCE EEW (the index elements are wider than the data elements).
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+ [(set (match_operand:VEEWTRUNC2 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC2 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+ [(set (match_operand:VEEWTRUNC4 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC4 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+ [(set (match_operand:VEEWTRUNC8 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC8 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
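+;; Indexed stores, one pattern per data/index ratio pairing; all of them emit vs<th_order>xe.v.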
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:RATIO64 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:RATIO32 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:RATIO16 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:RATIO8 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:RATIO4 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:RATIO2 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO1 2 "register_operand" " vr")
+ (match_operand:RATIO1 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO1:MODE>")])
+
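+;; Mask population count: vmpopc.m is the XTheadVector counterpart of vcpop.m.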
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (popcount:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+ "TARGET_XTHEADVECTOR"
+ "vmpopc.m\t%0,%2%p1"
+ [(set_attr "type" "vmpop")
+ (set_attr "mode" "<VB:MODE>")])
+
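+;; Find-first-set mask bit: vmfirst.m returns a 0-based index (-1 if no bit is set), hence the (ffs - 1) form, GCC's ffs being 1-based.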
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (plus:P
+ (ffs:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+ (const_int -1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmfirst.m\t%0,%2%p1"
+ [(set_attr "type" "vmffs")
+ (set_attr "mode" "<VB:MODE>")])
+
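+;; Narrowing float-to-integer conversions, rounded according to the dynamic rounding mode in FRM.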
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VNCONVERT>
+ [(match_operand:V_VLSF 3 "register_operand" " vd, vd, vr, vr, vr, vr")] VFCVTS)
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftoi")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:<VNCONVERT>
+ (match_operand:VWCONVERTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.x<u>.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtitof")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
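+;; Narrowing right shifts: vnsrl/vnsra take a 2*SEW source and produce a SEW result.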
+(define_insn "@pred_th_narrow_<optab><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, vd, vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
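+;; Integer truncation is a narrowing shift by zero: vnsrl.vx with x0 as the shift amount.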
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnsrl.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (float_truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTF_ZVFHMIN 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
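+;; Fault-only-first load: vleff.v traps only on the first element and writes the number of elements actually loaded back to VL, modelled by the second set.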
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m")] UNSPEC_VLEFF)
+ (match_operand:V 2 "vector_merge_operand" " vu, 0, vu, 0")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:V
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vleff.v\t%0,%3%p1"
+ [(set_attr "type" "vldff")
+ (set_attr "mode" "<MODE>")])
+
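+;; Segment (tuple) loads and stores: vlseg<nf>e.v and friends move <nf> fields per segment.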
+(define_insn "@pred_th_unit_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>e.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegde")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 2 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vsseg<nf>e.v\t%2,(%z1)%p0"
+ [(set_attr "type" "vssegte")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (match_operand 4 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+ [(set_attr "type" "vlsegds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand 2 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 3 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+ [(set_attr "type" "vssegts")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:VT
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>eff.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegdff")
+ (set_attr "mode" "<MODE>")])
+
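+;; Indexed segment loads, one pattern per tuple mode and index ratio pairing.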
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+ [(set (match_operand:V1T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V1T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V1T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO64I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V1T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+ [(set (match_operand:V2T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V2T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V2T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO32I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V2T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+ [(set (match_operand:V4T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V4T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V4T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO16I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V4T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+ [(set (match_operand:V8T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V8T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V8T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO8I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V8T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+ [(set (match_operand:V16T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V16T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V16T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO4I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V16T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+ [(set (match_operand:V32T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V32T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V32T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO2I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V32T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V32T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:V1T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:V2T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:V4T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:V8T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:V16T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:V32T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V32T:MODE>")])
+
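+;; Float negation via sign injection: vfsgnjn.vv vd,vs,vs flips the sign of every element.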
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_neg:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjn.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
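+;; Float absolute value via sign injection: vfsgnjx.vv vd,vs,vs XORs each element's sign bit with itself, clearing it.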
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_abs:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjx.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
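+;; One's complement: vnot.v is the assembler pseudo for vxor.vi with -1.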
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (not_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vnot.v\t%0,%3%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
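+;; Integer negation: vrsub.vx with x0 computes 0 - vs for each element.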
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vrsub.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
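+;; Remaining float unary ops that depend on the dynamic rounding mode in FRM (emitted as vf<insn>.v).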
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vf<insn>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
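+;; Narrowing fixed-point clips: vnclip/vnclipu round (per VXRM) and saturate a 2*SEW source down to SEW.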
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, &vd, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_LMUL1> 0 "register_operand" "=vr,vr")
+ (unspec:<V_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_LMUL1> [
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr")
+ (match_operand:<V_LMUL1> 4 "register_operand" " vr, vr")
+ ] ANY_FREDUC_SUM)
+ (match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_EXT_LMUL1> 0 "register_operand" "=&vr, &vr")
+ (unspec:<V_EXT_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_EXT_LMUL1> [
+ (match_operand:VF_HS 3 "register_operand" " vr, vr")
+ (match_operand:<V_EXT_LMUL1> 4 "register_operand" " vr0, vr0")
+ ] ANY_FWREDUC_SUM)
+ (match_operand:<V_EXT_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
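+;; Carry/borrow-out masks: the vmadc/vmsbc forms with a carry-in mask operand produce the per-element carry-out.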
+(define_insn "@pred_th_madc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (match_operand:<VM> 3 "register_operand" " vm, vm, vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2m\t%0,%1,%v2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vvm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "register_operand" " r"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
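+;; Carry/borrow-out without carry-in: the _overflow variants use the two-source vmadc/vmsbc forms.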
+(define_insn "@pred_th_madc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK, rK, rK")
+ (match_operand 4 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2\t%0,%1,%v2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vv\t%0,%1,%2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx boardcast_scalar) {
+ emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+ boardcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx boardcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+ boardcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
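+;; For example (illustrative operands, not generated output), SEW=32
+;; and LMUL=2 would emit "vsetvli a0,a1,e32,m2".  XTheadVector's
+;; vsetvli carries no tail/mask-policy fields, so operands 4 and 5
+;; are never printed; they only feed the "ta"/"ma" insn attributes.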
+(define_insn "*th_vsetvl<mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+ (match_dup 3)
+ (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\t%0,%1,e%2,%m3"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+ [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+ [(match_operand 0 "const_int_operand" "i")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,zero,e%0,%m1"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+ [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,%0,e%1,%m2"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern lets us benefit from these optimizations.
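+;; (A sketch of the lifecycle, assuming the standard split machinery:
+;; the "#" template emits nothing by itself, so passes may CSE or
+;; delete the insn freely; once epilogue_completed it splits into the
+;; full parallel above, which does set VL and VTYPE.)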
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "#"
+ "&& epilogue_completed"
+ [(parallel
+ [(set (match_dup 0)
+ (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+ (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))])]
+ ""
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")])
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_arith_operand" "vrvi")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vr, vr, vi, vi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vrvi, vrvi, vr, vr, vrvi, vr, vr, vrvi, vrvi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vr, vr, vj, vj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vrvj, vrvj, vr, vr, vrvj, vr, vr, vrvj, vrvj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_QHS 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
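+;; The patterns below replay the QHS scalar compares for 64-bit
+;; elements whose scalar already fits in one GPR; the rv32 case is
+;; handled by the _extended_scalar variants further down, which
+;; sign-extend a 32-bit register source.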
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
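+;; These _extended_scalar forms model the ISA rule for SEW > XLEN:
+;; the 32-bit GPR source is sign-extended to the element width before
+;; the comparison (assuming the 0.7.1 behavior matches RVV 1.0 here,
+;; as the shared vms*.vx output suggests).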
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
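+;; Floating-point compares mirror the integer forms, but emit vmf<op>
+;; against a vector or an f-register scalar; e.g. "vmflt.vv v2,v8,v16"
+;; or "vmfeq.vf v2,v8,fa0" (illustrative operands).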
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (match_operand:V_VLSF 4 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vv\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))
+ (match_operand:V_VLSF 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
])
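+;; XTheadVector predates the fractional-LMUL design, so every
+;; fractional mode (RVVMF*) in the iterators below gains a
+;; !TARGET_XTHEADVECTOR guard; whole-register groups (LMUL >= 1) are
+;; unaffected.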
(define_mode_iterator VI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
(define_mode_iterator VF_ZVFHMIN [
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
])
(define_mode_iterator VEEWEXT2 [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
])
(define_mode_iterator VEEWEXT4 [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
])
(define_mode_iterator VEEWTRUNC2 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM4SI "TARGET_64BIT")
 (RVVM2SI "TARGET_64BIT")
 (RVVM1SI "TARGET_64BIT")
- (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
- RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM2HI "TARGET_64BIT")
 (RVVM1HI "TARGET_64BIT")
- (RVVMF2HI "TARGET_64BIT")
- (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
 (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
 (RVVM1QI "TARGET_64BIT")
- (RVVMF2QI "TARGET_64BIT")
- (RVVMF4QI "TARGET_64BIT")
- (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
])
(define_mode_iterator VFULLI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
])
(define_mode_iterator VI_QH [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
])
(define_mode_iterator VI_QHS_NO_M8 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
(define_mode_iterator VF_HS [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
 (RVVM4HF "TARGET_ZVFH")
 (RVVM2HF "TARGET_ZVFH")
 (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
])
(define_mode_iterator V_VLSI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
])
(define_mode_iterator RATIO64I [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
])
(define_mode_iterator V_FRACT [
- RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
 (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
(define_mode_iterator VWCONVERTI [
 (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
- (RVVMF2SI "TARGET_ZVFH")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
])
(define_mode_iterator VQEXTI [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
(define_mode_iterator V_VLS_F_CONVERT_SI [
 (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
- (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
- (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+ (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
 V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 16)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 32)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 64)
+ (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
 vislide1up,vislide1down,vfslide1up,vfslide1down,\
 vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+ vlsegdff,vssegtux,vlsegdox,vlsegdux")
+ (match_test "TARGET_XTHEADVECTOR"))
+ (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>_whole"
 [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" " m,vr,vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "@
 vl%m1re<sew>.v\t%0,%1
 vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>"
 [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "vmv1r.v\t%0,%1"
 [(set_attr "type" "vmov")
 (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 (any_extend:VWEXTI
 (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
 (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf2\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 (any_extend:VQEXTI
 (match_operand:<V_QUAD_TRUNC> 3 "register_operand" "W43,W43,W43,W43,W86,W86,W86,W86, vr, vr"))
 (match_operand:VQEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf4\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 (any_extend:VOEXTI
 (match_operand:<V_OCT_TRUNC> 3 "register_operand" "W87,W87,W87,W87, vr, vr"))
 (match_operand:VOEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf8\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
 }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+ return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+ #ifndef __riscv_xtheadvector
+ #error "Not __riscv_xtheadvector"
+ #endif
+ }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
--
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:41           ` Re: Re: [PATCH " joshua
@ 2023-12-20 14:48             ` 钟居哲
  2023-12-20 14:55             ` 钟居哲
  1 sibling, 0 replies; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 14:48 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu

No. I mean: why did you add a vfsqrt.v pattern that is totally the same as the current vfsqrt.v?

juzhe.zhong@rivai.ai

From: joshua
Sent: 2023-12-20 22:41
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

Yes, XTheadVector does not have vfneg.v as a pseudo instruction for vfsgnjn.vv.

We have listed all the differences between vector and xtheadvector in our spec. You may refer to it.

https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc
https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd
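
For reference, vfneg.v is only an assembler alias in the ratified 'V'
extension, so the difference can be sketched as follows (register
numbers are arbitrary; this is an illustration, not compiler output):

    # standard 'V': the assembler accepts the alias
    vfneg.v    v8, v9        # expands to vfsgnjn.vv v8, v9, v9
    # XTheadVector provides no such alias, so the pattern has to
    # emit the sign-injection form directly:
    vfsgnjn.vv v8, v9, v9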

Joshua
------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, 20 December 2023, 22:27
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Why do you add this?

+(define_insn "@pred_th_<optab><mode>"



+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")



+ (if_then_else:V_VLSF



+   (unspec:<VM>



+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")



+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")



+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")



+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")



+      (reg:SI VL_REGNUM)



+      (reg:SI VTYPE_REGNUM)



+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)



+   (any_float_unop:V_VLSF



+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))



+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]



+  "TARGET_XTHEADVECTOR"



+  "vf<insn>.v\t%0,%3%p1"



+  [(set_attr "type" "<float_insn_type>")



+   (set_attr "mode" "<MODE>")



+   (set_attr "vl_op_idx" "4")



+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))



+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))



+   (set (attr "avl_type_idx") (const_int 7))



+   (set (attr "frm_mode")



+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])







Theadvector is not th.vfneg.v ?











juzhe.zhong@rivai.ai

















From: joshua
Sent: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

The patterns you supposed redundant are all necessary, because they generate different instructions from vector.

Take pred_th_unit_strided_store as an example: xtheadvector does not have <sew> in its load/store instructions, and we cannot reuse the same pattern as vector. That is why we define a new function_base in thead-vector-builtins-functions.def.
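
A minimal sketch of that mnemonic difference for a two-field segment
store (operands chosen arbitrarily for illustration):

    # the standard 'V' extension encodes the element width in the opcode:
    vsseg2e32.v  v8, (a0)
    # xtheadvector takes the element width from vtype, so the pattern
    # quoted below emits the suffix-less form instead:
    vsseg2e.v    v8, (a0)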

Joshua
------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, 20 December 2023, 22:00
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+       (match_operand 3 "vector_length_operand"    "   rK")









+       (match_operand 4 "const_int_operand"        "    i")









+       (reg:SI VL_REGNUM)









+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand:VT 2 "register_operand"         "   vr")









+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]









+  "TARGET_XTHEADVECTOR"









+  "vsseg<nf>e.v\t%2,(%z1)%p0"









+  [(set_attr "type" "vssegte")









+   (set_attr "mode" "<MODE>")])

















These patterns are redundant just names are different.









They should be removed.









juzhe.zhong@rivai.ai

















From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix
to all XTheadVector instructions is not included here.

For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
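
To make the guard concrete, here is a sketch of two such mnemonics
(illustrative operands, not taken from any particular test case):

    # valid with the standard 'V' extension only:
    vmv1r.v   v8, v9         # whole-register move
    vsext.vf2 v8, v12        # widening sign-extension
    # XTheadVector, which predates these encodings, defines neither
    # instruction, so the corresponding vector.md patterns are now
    # conditioned on "TARGET_VECTOR && !TARGET_XTHEADVECTOR".
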
gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
 	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
 	extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
-	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+	extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  {
    riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				  operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
        (match_operand 0 "register_operand")))
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+      {
+	emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					  RVV_VLMAX, GEN_INT(VLMAX)));
+	return true;
+      }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 /* Get prefer mask policy.  */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 /* Get avl_type rtx.  */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
 		    group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
 using namespace riscv_vector;
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
 #include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \
+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
 };
 /* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */
   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */
   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */
+  XTHEADVECTOR_EXT,   /* XTheadVector extension */
 };
 /* Enumerates the RVV operand types.  */
@@ -233,7 +234,7 @@ struct function_group_info
     switch (ext_value)
     {
       case VECTOR_EXT:
-        return TARGET_VECTOR;
+	return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);
       case ZVBB_EXT:
        return TARGET_ZVBB;
       case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
        return TARGET_ZVKSED;
       case ZVKSH_EXT:
        return TARGET_ZVKSH;
+      case XTHEADVECTOR_EXT:
+	return TARGET_XTHEADVECTOR;
       default:
        gcc_unreachable ();
     }
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
  if (riscv_v_ext_vector_mode_p (mode))
    {
+      if (TARGET_XTHEADVECTOR)
+        return BYTES_PER_RISCV_VECTOR;
+
      poly_int64 nunits = GET_MODE_NUNITS (mode);
      poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
    return riscv_vector::preferred_simd_mode (mode);
  return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
    return riscv_vector::autovectorize_vector_modes (modes, all);
  return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
  return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It
+   does not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
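
Usage mirrors riscv_vector.h: a source file compiled with the extension
enabled just includes the header, and the pragma above instructs GCC to
register the types and intrinsic functions.  A minimal sketch (the intrinsic
spelling below is an assumption for illustration; the real names come from
the function definitions added later in this series):

#include <riscv_th_vector.h>

size_t
get_vl (size_t avl)
{
  /* Assumed intrinsic name, shown only to illustrate the include.  */
  return __riscv_th_vsetvl_e32m1 (avl);
}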
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
  $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
  $(RISCV_BUILTINS_H)
	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
  $(SYSTEM_H) $(TM_H)
	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
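
The new thead-vector-builtins.o rule deliberately mirrors the existing
riscv-vector-builtins-bases.o rule just above it: the object depends on the
new .cc file, the shared builtins headers, and the new .def file, and is
compiled with the same $(COMPILER) invocation, so no other Makefile plumbing
should be needed.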
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use.  */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions.  */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores.  */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions.  */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16. Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions.  */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions.  */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
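
This .def file is meant to be textually included by the builtins machinery,
with the DEF_* macros defined by the includer; the guards at the top make a
bare include a no-op.  Sketched under the assumption that it follows the
riscv-vector-builtins-functions.def pattern (the registration body below is
hypothetical):

/* Hypothetical consumer: expand each definition into a table entry.  */
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO},
#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) \
  {#NAME, &BASE##_obj, &shapes::SHAPE, PREDS, OPS_INFO},
#include "thead-vector-builtins-functions.def"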
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
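
A worked instance of the operand wiring above: for an e32/m1 type such as
vint32m1_t, GET_MODE_INNER yields SImode, so the SEW operand becomes
gen_int_mode (32, Pmode), and get_vlmul supplies the m1 encoding; with the
preferred tail/mask policies appended, the pattern behind
code_for_th_vsetvl_no_side_effects can then emit something like
"vsetvli rd, rs1, e32, m1" in the 0.7.1 encoding (the exact assembly
depends on the thead-vector.md patterns elsewhere in this patch).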









+









+/* Implements









+ * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v









+ * codegen.  */









+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>









+class th_loadstore : public function_base









+{









+public:









+  bool apply_tail_policy_p () const override { return !STORE_P; }









+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+        int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+        if (STORE_P)
+          return e.use_exact_insn (
+            code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+                                            e.index_mode ()));
+        else
+          {
+            unsigned src_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+            unsigned dst_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+            if (dst_eew_bitsize == src_eew_bitsize)
+              {
+                return e.use_exact_insn (
+                  code_for_pred_th_indexed_load_same_eew (
+                    unspec, e.vector_mode ()));
+              }
+            else if (dst_eew_bitsize > src_eew_bitsize)
+              {
+                unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_greater_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+            else
+              {
+                unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+          }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_strided_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
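
For reference, the indexed-load half of the expander above dispatches purely
on the ratio between the destination element width and the index element
width.  A minimal standalone sketch of that selection logic (the function
name pick_indexed_load_variant is illustrative only, not part of the patch):

    #include <assert.h>

    /* Returns +f for the xF_greater_eew pattern, -f for xF_smaller_eew,
       and 0 for the same-EEW pattern, mirroring th_loadstore::expand.  */
    static int
    pick_indexed_load_variant (unsigned dst_eew_bitsize,
                               unsigned src_eew_bitsize)
    {
      if (dst_eew_bitsize == src_eew_bitsize)
        return 0;
      unsigned factor = dst_eew_bitsize > src_eew_bitsize
                        ? dst_eew_bitsize / src_eew_bitsize
                        : src_eew_bitsize / dst_eew_bitsize;
      assert (factor == 2 || factor == 4 || factor == 8);
      return dst_eew_bitsize > src_eew_bitsize
             ? (int) factor : -(int) factor;
    }

For example, loading 64-bit elements through an 8-bit index vector gives
pick_indexed_load_variant (64, 8) == 8, i.e. the x8_greater_eew insn.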
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
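
As a reference for the semantics behind th_vnclip, here is a scalar sketch
of one signed vnclip lane (32 -> 16 bits).  The sketch hard-wires
round-to-nearest-up; the real instruction takes its rounding mode from
vxrm, which is why may_require_vxrm_p returns true above:

    #include <stdint.h>

    static int16_t
    vnclip_lane (int32_t wide, unsigned shift)
    {
      int64_t v = wide;
      if (shift > 0)
        v = (v + (1LL << (shift - 1))) >> shift;  /* RNU rounding  */
      if (v > INT16_MAX) v = INT16_MAX;           /* saturate into SEW  */
      if (v < INT16_MIN) v = INT16_MIN;
      return (int16_t) v;
    }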
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
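
A scalar sketch of the two mask queries above (ignoring the extra predicate
mask that %p1 folds in): vmpopc.m counts the set bits of a mask over vl,
and vmfirst.m returns the index of the first set bit, or -1 when none is
set -- exactly the (plus (ffs ...) (const_int -1)) RTL that @pred_th_ffs
uses in thead-vector.md below:

    static long
    vmpopc (const unsigned char *mask, long vl)
    {
      long n = 0;
      for (long i = 0; i < vl; i++)
        n += (mask[i / 8] >> (i % 8)) & 1;
      return n;
    }

    static long
    vmfirst (const unsigned char *mask, long vl)
    {
      for (long i = 0; i < vl; i++)
        if ((mask[i / 8] >> (i % 8)) & 1)
          return i;                 /* first active bit  */
      return -1;                    /* no bit set        */
    }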
+
+/* Implements vmadc.  */
+class th_vmadc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
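
For reference, one lane of vmadc/vmsbc produces the carry-out/borrow-out
bit rather than the sum or difference, which is why both classes above
disable the merge operand and the tail/mask policies.  A scalar sketch for
32-bit elements (the vvm/vxm forms additionally feed in the carry/borrow
mask bit shown here):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    vmadc_lane (uint32_t a, uint32_t b, bool carry_in)
    {
      uint64_t s = (uint64_t) a + b + carry_in;
      return (s >> 32) & 1;                            /* carry out   */
    }

    static bool
    vmsbc_lane (uint32_t a, uint32_t b, bool borrow_in)
    {
      return (uint64_t) a < (uint64_t) b + borrow_in;  /* borrow out  */
    }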
+
+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+
+/* Implements vfncvt.f.  */
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (
+        code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
+
+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
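
The only difference between the ordered and unordered reduction UNSPECs
used above is whether the additions must be performed strictly in element
order.  A scalar sketch of the ordered form (vfredosum), which folds the
vector into the scalar seed element by element:

    static float
    vfredosum (float seed, const float *v, long vl)
    {
      float acc = seed;
      for (long i = 0; i < vl; i++)
        acc += v[i];                /* strictly in element order  */
      return acc;
    }

The unordered form is free to reassociate, e.g. to use a tree reduction.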
+
+/* Implements vleff.  */
+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vluxseg/vloxseg.  */
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vsuxseg/vsoxseg.  */
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
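
For instance, BASE (th_vle) expands to

    namespace bases { const function_base *const th_vle = &th_vle_obj; }

which is what the matching extern declarations in thead-vector-builtins.h
below refer to.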
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+  (UNSPEC_REDUC_SUM "redsum")
+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax")
+  (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor")
+  (UNSPEC_REDUC_XOR "redxor")
+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX,
+                                      GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"    "=vr,vr, m")
+        (unspec:V_VLS_VT
+          [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"       " rK,rK,rK")
+           (match_operand 3 "const_1_operand"             "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"    "=vr,vr, m")
+        (unspec:VB
+          [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand" " rK,rK,rK")
+           (match_operand 3 "const_1_operand"       "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_expand "@pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+        (if_then_else:V_VLS
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand")
+             (match_operand 4 "vector_length_operand")
+             (match_operand 5 "const_int_operand")
+             (match_operand 6 "const_int_operand")
+             (match_operand 7 "const_int_operand")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (match_operand:V_VLS 3 "vector_move_operand")
+          (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")
+        (if_then_else:V_VLSI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+             (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+             (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSI
+            (match_operand:<VEL> 3 "direct_broadcast_operand"      "  r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))
+          (match_operand:V_VLSI 2 "vector_merge_operand"           " vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.x\t%0,%3
+   vmv.v.x\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vmv.s.x\t%0,%3
+   vmv.s.x\t%0,%3"
+  "(register_operand (operands[3], <VEL>mode)
+    || CONST_POLY_INT_P (operands[3]))
+   && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+  [(set (match_dup 0)
+        (if_then_else:V_VLSI
+          (unspec:<VM> [(match_dup 1) (match_dup 4) (match_dup 5)
+                        (match_dup 6) (match_dup 7)
+                        (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)]
+                       UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSI (match_dup 3))
+          (match_dup 2)))]
+  {
+    gcc_assert (can_create_pseudo_p ());
+    if (CONST_POLY_INT_P (operands[3]))
+      {
+        rtx tmp = gen_reg_rtx (<VEL>mode);
+        emit_move_insn (tmp, operands[3]);
+        operands[3] = tmp;
+      }
+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+                                GET_MODE_ALIGNMENT (<VEL>mode));
+    m = validize_mem (m);
+    emit_move_insn (m, operands[3]);
+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+    operands[3] = m;
+
+    /* For SEW = 64 on RV32, we expand vmv.s.x as:
+       andi a2,a2,1
+       vsetvl zero,a2,e64
+       vlse64.v  */
+    if (satisfies_constraint_Wb1 (operands[1]))
+      {
+        operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+        operands[1] = CONSTM1_RTX (<VM>mode);
+      }
+  }
+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")
+        (if_then_else:V_VLSF_ZVFHMIN
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+             (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+             (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSF_ZVFHMIN
+            (match_operand:<VEL> 3 "direct_broadcast_operand"      "  f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))
+          (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"   " vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vfmv.v.f\t%0,%3
+   vfmv.v.f\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vfmv.s.f\t%0,%3
+   vfmv.s.f\t%0,%3"
+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+   (set_attr "mode" "<MODE>")])
+
+;; vle.v/vse.v, vmv.v.v
+(define_insn_and_split "*pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"        "=vr,    vr,    vd,     m,    vr,    vr")
+        (if_then_else:V_VLS
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"     "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+             (match_operand 4 "vector_length_operand"        "   rK,    rK,    rK,    rK,    rK,    rK")
+             (match_operand 5 "const_int_operand"            "    i,     i,     i,     i,     i,     i")
+             (match_operand 6 "const_int_operand"            "    i,     i,     i,     i,     i,     i")
+             (match_operand 7 "const_int_operand"            "    i,     i,     i,     i,     i,     i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (match_operand:V_VLS 3 "reg_or_mem_operand"        "    m,     m,     m,    vr,    vr,    vr")
+          (match_operand:V_VLS 2 "vector_merge_operand"      "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+        || register_operand (operands[3], <MODE>mode)))"
+  "@
+   vle.v\t%0,%3%p1
+   vle.v\t%0,%3
+   vle.v\t%0,%3,%1.t
+   vse.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn_and_split "@pred_th_mov<mode>"
+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")
+        (if_then_else:VB_VLS
+          (unspec:VB_VLS
+            [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+             (match_operand 4 "vector_length_operand"                " rK,  rK,  rK,  rK,  rK")
+             (match_operand 5 "const_int_operand"                    "  i,   i,   i,   i,   i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")
+          (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   #
+   #
+   vmcpy.m\t%0,%3
+   vmclr.m\t%0
+   vmset.m\t%0"
+  "&& !reload_completed"
+  [(const_int 0)]
+  {
+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+        || (REG_P (operands[0]) && REG_P (operands[3])
+            && INTVAL (operands[5]) == riscv_vector::VLMAX))
+      {
+        emit_move_insn (operands[0], operands[3]);
+        DONE;
+      }
+
+    FAIL;
+  }
+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"                 "+m")
+        (if_then_else:V
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 3 "vector_length_operand"    "   rK")
+             (match_operand 4 "const_int_operand"        "    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (match_operand:V 2 "register_operand"          "   vr")
+          (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vse.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+(define_insn "@pred_th_strided_load<mode>"
+  [(set (match_operand:V 0 "register_operand"            "=vr,   vr,   vd,   vr,   vr,   vd")
+        (if_then_else:V
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,  Wc1,   vm,vmWc1,  Wc1,   vm")
+             (match_operand 5 "vector_length_operand"    "   rK,   rK,   rK,   rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"        "    i,    i,    i,    i,    i,    i")
+             (match_operand 7 "const_int_operand"        "    i,    i,    i,    i,    i,    i")
+             (match_operand 8 "const_int_operand"        "    i,    i,    i,    i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:V
+            [(match_operand:V 3 "memory_operand"         "    m,    m,    m,    m,    m,    m")
+             (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+          (match_operand:V 2 "vector_merge_operand"      "    0,   vu,   vu,    0,   vu,   vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vlse.v\t%0,%3,%z4%p1
+   vlse.v\t%0,%3,%z4
+   vlse.v\t%0,%3,%z4,%1.t
+   vle.v\t%0,%3%p1
+   vle.v\t%0,%3
+   vle.v\t%0,%3,%1.t"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+  [(set (match_operand:V 0 "memory_operand"              "+m,    m")
+        (if_then_else:V
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 4 "vector_length_operand"    "   rK,   rK")
+             (match_operand 5 "const_int_operand"        "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:V
+            [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")
+             (match_operand:V 3 "register_operand"       "   vr,   vr")] UNSPEC_STRIDED)
+          (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vsse.v\t%3,%0,%z2%p1
+   vse.v\t%3,%0%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
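
The two patterns above emit vlse.v/vsse.v, with vle.v/vse.v alternatives
for the constraint case where no scalar stride register is supplied.  A
scalar sketch of the vlse.v addressing, assuming suitably aligned memory:
lane i is read from base + i * stride (stride in bytes); a zero stride
splats a single element, which is how the *pred_broadcast patterns above
use it:

    #include <stddef.h>
    #include <stdint.h>

    static void
    vlse32 (int32_t *dst, const char *base, ptrdiff_t stride, long vl)
    {
      for (long i = 0; i < vl; i++)
        dst[i] = *(const int32_t *) (base + i * stride);
    }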
+
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")
+        (if_then_else:V
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+             (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+             (match_operand 6 "const_int_operand"         "  i,  i, i,  i")
+             (match_operand 7 "const_int_operand"         "  i,  i, i,  i")
+             (match_operand 8 "const_int_operand"         "  i,  i, i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:V
+            [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+          (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST eew is greater than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+  [(set (match_operand:VEEWEXT2 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT2
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT2
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+  [(set (match_operand:VEEWEXT4 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT4
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT4
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+  [(set (match_operand:VEEWEXT8 0 "register_operand"                   "=&vr,  &vr")
+        (if_then_else:VEEWEXT8
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                  "   rK,   rK")
+             (match_operand 6 "const_int_operand"                      "    i,    i")
+             (match_operand 7 "const_int_operand"                      "    i,    i")
+             (match_operand 8 "const_int_operand"                      "    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWEXT8
+            [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)
+          (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+;; DEST eew is smaller than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC2
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC2
+            [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC4
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC4
+            [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")
+        (if_then_else:VEEWTRUNC8
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")
+             (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VEEWTRUNC8
+            [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")
+             (mem:BLK (scratch))
+             (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)
+          (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlxe.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vld<order>x")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO64I 2 "register_operand"   "   vr")
+           (match_operand:RATIO64 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO32I 2 "register_operand"   "   vr")
+           (match_operand:RATIO32 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO16I 2 "register_operand"   "   vr")
+           (match_operand:RATIO16 3 "register_operand"    "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO8I 2 "register_operand"    "   vr")
+           (match_operand:RATIO8 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO4I 2 "register_operand"    "   vr")
+           (match_operand:RATIO4 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO2I 2 "register_operand"    "   vr")
+           (match_operand:RATIO2 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+  [(set (mem:BLK (scratch))
+        (unspec:BLK
+          [(unspec:<VM>
+             [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+              (match_operand 4 "vector_length_operand"    "   rK")
+              (match_operand 5 "const_int_operand"        "    i")
+              (reg:SI VL_REGNUM)
+              (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+           (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+           (match_operand:RATIO1 2 "register_operand"     "   vr")
+           (match_operand:RATIO1 3 "register_operand"     "   vr")] ORDER))]
+  "TARGET_XTHEADVECTOR"
+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<RATIO1:MODE>")])
+
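+;; Mask population count.  XTheadVector keeps the pre-ratification
+;; mnemonic vmpopc.m, which RVV 1.0 renamed to vcpop.m.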









+(define_insn "@pred_th_popcount<VB:mode><P:mode>"









+  [(set (match_operand:P 0 "register_operand"               "=r")









+ (popcount:P









+   (unspec:VB









+     [(and:VB









+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")









+        (match_operand:VB 2 "register_operand"    "   vr"))









+      (match_operand 3 "vector_length_operand"    "   rK")









+      (match_operand 4 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]









+  "TARGET_XTHEADVECTOR"









+  "vmpopc.m\t%0,%2%p1"









+  [(set_attr "type" "vmpop")









+   (set_attr "mode" "<VB:MODE>")])









+
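+;; Find-first-set on a mask.  vmfirst.m yields the 0-based index of the
+;; first set element (-1 if none), while RTL ffs is 1-based, hence the
+;; (plus (ffs ...) (const_int -1)) wrapping.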









+(define_insn "@pred_th_ffs<VB:mode><P:mode>"









+  [(set (match_operand:P 0 "register_operand"                 "=r")









+ (plus:P









+   (ffs:P









+     (unspec:VB









+       [(and:VB









+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")









+          (match_operand:VB 2 "register_operand"    "   vr"))









+        (match_operand 3 "vector_length_operand"    "   rK")









+        (match_operand 4 "const_int_operand"        "    i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))









+   (const_int -1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmfirst.m\t%0,%2%p1"









+  [(set_attr "type" "vmffs")









+   (set_attr "mode" "<VB:MODE>")])









+
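+;; Narrowing float-to-integer conversions.  XTheadVector spells these
+;; vfncvt.x.f.v / vfncvt.xu.f.v; RVV 1.0 uses the .w source suffix.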









+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"









+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<VNCONVERT>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:<VNCONVERT>









+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)









+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtftoi")









+   (set_attr "mode" "<VNCONVERT>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+
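+;; Narrowing integer-to-float conversions (vfncvt.f.x.v / vfncvt.f.xu.v).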









+(define_insn "@pred_th_narrow_<float_cvt><mode>"









+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<VNCONVERT>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float:<VNCONVERT>









+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.f.x<u>.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtitof")









+   (set_attr "mode" "<VNCONVERT>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+
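+;; Narrowing right shifts.  XTheadVector uses the .vv/.vx/.vi operand
+;; suffixes for vnsrl/vnsra where RVV 1.0 uses .wv/.wx/.wi.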









+(define_insn "@pred_th_narrow_<optab><mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (any_shiftrt:VWEXTI









+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")









+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vn<insn>.v%o4\t%0,%3,%v4%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+









+(define_insn "@pred_th_narrow_<optab><mode>_scalar"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (any_shiftrt:VWEXTI









+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")









+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vn<insn>.v%o4\t%0,%3,%4%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+
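+;; Integer truncation, emitted as a narrowing logical shift right by x0.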









+(define_insn "@pred_th_trunc<mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vnsrl.vx\t%0,%3,x0%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+
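+;; Floating-point narrowing conversion (vfncvt.f.f.v).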









+(define_insn "@pred_th_trunc<mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (float_truncate:<V_DOUBLE_TRUNC>









+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.f.f.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtftof")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+
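+;; Fault-only-first load.  The parallel second set models the side effect
+;; that vleff.v may truncate vl at the first faulting element.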









+(define_insn "@pred_th_fault_load<mode>"









+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V









+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)









+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))









+   (set (reg:SI VL_REGNUM)









+   (unspec:SI









+     [(if_then_else:V









+        (unspec:<VM>









+ [(match_dup 1) (match_dup 4) (match_dup 5)









+ (match_dup 6) (match_dup 7)









+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)









+        (match_dup 2))] UNSPEC_MODIFY_VL))]









+  "TARGET_XTHEADVECTOR"









+  "vleff.v\t%0,%3%p1"









+  [(set_attr "type" "vldff")









+   (set_attr "mode" "<MODE>")])









+
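+;; Segment (tuple) unit-strided load.  XTheadVector segment accesses do
+;; not encode the element width in the mnemonic (vlseg<nf>e.v).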









+(define_insn "@pred_th_unit_strided_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]









+  "TARGET_XTHEADVECTOR"









+  "vlseg<nf>e.v\t%0,(%z3)%p1"









+  [(set_attr "type" "vlsegde")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_unit_strided_store<mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+       (match_operand 3 "vector_length_operand"    "   rK")









+       (match_operand 4 "const_int_operand"        "    i")









+       (reg:SI VL_REGNUM)









+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand:VT 2 "register_operand"         "   vr")









+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]









+  "TARGET_XTHEADVECTOR"









+  "vsseg<nf>e.v\t%2,(%z1)%p0"









+  [(set_attr "type" "vssegte")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_strided_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (match_operand 8 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_STRIDED)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]









+  "TARGET_XTHEADVECTOR"









+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"









+  [(set_attr "type" "vlsegds")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_strided_store<mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+       (match_operand 4 "vector_length_operand"    "   rK")









+       (match_operand 5 "const_int_operand"        "    i")









+       (reg:SI VL_REGNUM)









+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand:VT 3 "register_operand"         "   vr")









+    (mem:BLK (scratch))] UNSPEC_STRIDED))]









+  "TARGET_XTHEADVECTOR"









+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"









+  [(set_attr "type" "vssegts")









+   (set_attr "mode" "<MODE>")])









+
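+;; Segment fault-only-first load (vlseg<nf>eff.v); as with vleff.v above,
+;; the second set models the possible update of vl.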









+(define_insn "@pred_th_fault_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_VLEFF)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))









+   (set (reg:SI VL_REGNUM)









+        (unspec:SI









+          [(if_then_else:VT









+      (unspec:<VM>









+        [(match_dup 1) (match_dup 4) (match_dup 5)









+         (match_dup 6) (match_dup 7)









+         (reg:SI VL_REGNUM)









+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+      (unspec:VT









+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)









+      (match_dup 2))] UNSPEC_MODIFY_VL))]









+  "TARGET_XTHEADVECTOR"









+  "vlseg<nf>eff.v\t%0,(%z3)%p1"









+  [(set_attr "type" "vlsegdff")









+   (set_attr "mode" "<MODE>")])









+
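+;; Indexed segment loads (vlxseg<nf>e.v), one pattern per tuple mode and
+;; matching index-ratio mode.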









+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"









+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V1T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V1T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V1T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"









+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V2T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V2T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V2T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"









+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V4T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V4T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V4T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"









+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V8T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V8T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V8T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"









+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")









+ (if_then_else:V16T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V16T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)









+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V16T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"









+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")









+ (if_then_else:V32T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V32T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)









+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V32T:MODE>")])









+
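+;; Indexed segment stores; <th_order> selects the ordered or unordered
+;; form of vs<th_order>xseg<nf>e.v.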









+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO64I 2 "register_operand"       "   vr")









+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V1T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO32I 2 "register_operand"       "   vr")









+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V2T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO16I 2 "register_operand"       "   vr")









+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V4T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO8I 2 "register_operand"       "   vr")









+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V8T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO4I 2 "register_operand"      "   vr")









+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V16T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO2I 2 "register_operand"      "   vr")









+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V32T:MODE>")])









+
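+;; Floating-point negation, expressed as vfsgnjn.vv vd,vs,vs.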









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")









+ (if_then_else:V_VLSF









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float_unop_neg:V_VLSF









+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))









+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vfsgnjn.vv\t%0,%3,%3%p1"









+  [(set_attr "type" "<float_insn_type>")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+
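+;; Floating-point absolute value, expressed as vfsgnjx.vv vd,vs,vs.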









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")









+ (if_then_else:V_VLSF









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float_unop_abs:V_VLSF









+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))









+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vfsgnjx.vv\t%0,%3,%3%p1"









+  [(set_attr "type" "<float_insn_type>")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")









+ (if_then_else:V_VLSI









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")









+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")









+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")









+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (not_unop:V_VLSI









+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))









+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vnot.v\t%0,%3%p1"









+  [(set_attr "type" "vialu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+
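+;; Integer negation, expressed as vrsub.vx vd,vs,x0.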









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")









+ (if_then_else:V_VLSI









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")









+      (match_operand 5 "const_int_operand" " i, i,  i,  i")









+      (match_operand 6 "const_int_operand" " i, i,  i,  i")









+      (match_operand 7 "const_int_operand" " i, i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (neg_unop:V_VLSI









+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))









+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vrsub.vx\t%0,%3,x0%p1"









+  [(set_attr "type" "vialu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")









+ (if_then_else:V_VLSF









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float_unop:V_VLSF









+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))









+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vf<insn>.v\t%0,%3%p1"









+  [(set_attr "type" "<float_insn_type>")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+
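+;; Narrowing fixed-point clips (vnclip/vnclipu).  These depend on the
+;; dynamic rounding mode, hence the VXRM_REGNUM use in the predicate.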









+(define_insn "@pred_th_narrow_clip<v_su><mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:<V_DOUBLE_TRUNC>









+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")









+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"









+  [(set_attr "type" "vnclip")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+









+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:<V_DOUBLE_TRUNC>









+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")









+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"









+  [(set_attr "type" "vnclip")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+









+;; Float Reduction Sum (vfred[ou]sum.vs)









+(define_insn "@pred_th_<th_reduc_op><mode>"









+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")









+ (unspec:<V_LMUL1>









+   [(unspec:<VM>









+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")









+      (match_operand               5 "vector_length_operand" "   rK,   rK")









+      (match_operand               6 "const_int_operand"     "    i,    i")









+      (match_operand               7 "const_int_operand"     "    i,    i")









+      (match_operand               8 "const_int_operand"     "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+           (unspec:<V_LMUL1> [









+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")









+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")









+           ] ANY_FREDUC_SUM)









+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]









+  "TARGET_XTHEADVECTOR"









+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"









+  [(set_attr "type" "vfred<order>")









+   (set_attr "mode" "<MODE>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+









+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)









+(define_insn "@pred_th_<th_reduc_op><mode>"









+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")









+ (unspec:<V_EXT_LMUL1>









+   [(unspec:<VM>









+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")









+      (match_operand                5 "vector_length_operand" "   rK,   rK")









+      (match_operand                6 "const_int_operand"     "    i,    i")









+      (match_operand                7 "const_int_operand"     "    i,    i")









+      (match_operand                8 "const_int_operand"     "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+           (unspec:<V_EXT_LMUL1> [









+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")









+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")









+           ] ANY_FWREDUC_SUM)









+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]









+  "TARGET_XTHEADVECTOR"









+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"









+  [(set_attr "type" "vfwred<order>")









+   (set_attr "mode" "<MODE>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+
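+;; Add/subtract-with-carry mask outputs.  vmadc.v{v,x,i}m and vmsbc.v{v,x}m
+;; produce the carry (borrow) out of operand 1 +/- operand 2 with the
+;; carry-in mask in operand 3; for example, with a hypothetical register
+;; allocation:
+;;   vmadc.vvm v4,v1,v2,v0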









+(define_insn "@pred_th_madc<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")









+ (unspec:<VM>









+    [(plus:VI









+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")









+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))









+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")









+        (match_operand 5 "const_int_operand"     "   i,   i,   i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.v%o2m\t%0,%1,%v2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_insn "@pred_th_msbc<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")









+ (unspec:<VM>









+    [(minus:VI









+      (match_operand:VI 1 "register_operand"     "  vr")









+      (match_operand:VI 2 "register_operand"     " vr"))









+     (match_operand:<VM> 3 "register_operand"    " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand" " rK")









+        (match_operand 5 "const_int_operand"     "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vvm\t%0,%1,%2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_insn "@pred_th_madc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(plus:VI_QHS









+      (vec_duplicate:VI_QHS









+        (match_operand:<VEL> 2 "register_operand" "  r"))









+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))









+     (match_operand:<VM> 3 "register_operand"     " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"  " rK")









+        (match_operand 5 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vxm\t%0,%1,%2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_insn "@pred_th_msbc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(minus:VI_QHS









+      (vec_duplicate:VI_QHS









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))









+     (match_operand:<VM> 3 "register_operand"     " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"  " rK")









+        (match_operand 5 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vxm\t%0,%1,%z2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+
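+;; For 64-bit element types the scalar operand may not fit a GPR (e.g. on
+;; rv32); sew64_scalar_helper then broadcasts it to a vector first and the
+;; lambda falls back to the vector-vector form.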









+(define_expand "@pred_th_madc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_int_operand"))









+      (match_operand:VI_D 1 "register_operand"))









+     (match_operand:<VM> 3 "register_operand")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand")









+        (match_operand 5 "const_int_operand")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]









+  "TARGET_XTHEADVECTOR"









+{









+  if (riscv_vector::sew64_scalar_helper (









+ operands,









+ /* scalar op */&operands[2],









+ /* vl */operands[4],









+ <MODE>mode,









+ riscv_vector::simm5_p (operands[2]),









+ [] (rtx *operands, rtx broadcast_scalar) {









+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],









+        broadcast_scalar, operands[3], operands[4], operands[5]));









+        },









+ (riscv_vector::avl_type) INTVAL (operands[5])))









+    DONE;









+})









+









+(define_insn "*pred_th_madc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_D 1 "register_operand"    "  vr"))









+     (match_operand:<VM> 3 "register_operand"     " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"  " rK")









+        (match_operand 5 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vxm\t%0,%1,%z2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_insn "*pred_th_madc<mode>_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (sign_extend:<VEL>









+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))









+      (match_operand:VI_D 1 "register_operand"         "  vr"))









+     (match_operand:<VM> 3 "register_operand"          " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"       " rK")









+        (match_operand 5 "const_int_operand"           "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vxm\t%0,%1,%z2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_expand "@pred_th_msbc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_int_operand"))









+      (match_operand:VI_D 1 "register_operand"))









+     (match_operand:<VM> 3 "register_operand")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand")









+        (match_operand 5 "const_int_operand")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]









+  "TARGET_XTHEADVECTOR"









+{









+  if (riscv_vector::sew64_scalar_helper (









+ operands,









+ /* scalar op */&operands[2],









+ /* vl */operands[4],









+ <MODE>mode,









+ false,









+ [] (rtx *operands, rtx broadcast_scalar) {









+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],









+        broadcast_scalar, operands[3], operands[4], operands[5]));









+        },









+ (riscv_vector::avl_type) INTVAL (operands[5])))









+    DONE;









+})









+









+(define_insn "*pred_th_msbc<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_D 1 "register_operand"    "  vr"))









+     (match_operand:<VM> 3 "register_operand"     " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"  " rK")









+        (match_operand 5 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vxm\t%0,%1,%z2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+(define_insn "*pred_th_msbc<mode>_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (sign_extend:<VEL>









+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))









+      (match_operand:VI_D 1 "register_operand"         "  vr"))









+     (match_operand:<VM> 3 "register_operand"          " vm")









+     (unspec:<VM>









+       [(match_operand 4 "vector_length_operand"       " rK")









+        (match_operand 5 "const_int_operand"           "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vxm\t%0,%1,%z2,%3"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "avl_type_idx") (const_int 5))])









+
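+;; Carry/borrow-out without carry-in: these compute only the overflow
+;; mask (UNSPEC_OVERFLOW), e.g. vmadc.vv / vmsbc.vx.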









+(define_insn "@pred_th_madc<mode>_overflow"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")









+ (unspec:<VM>









+    [(plus:VI









+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")









+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")









+        (match_operand 4 "const_int_operand"     "   i,   i,   i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.v%o2\t%0,%1,%v2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "@pred_th_msbc<mode>_overflow"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(minus:VI









+      (match_operand:VI 1 "register_operand"     "   vr")









+      (match_operand:VI 2 "register_operand"     "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand" "  rK")









+        (match_operand 4 "const_int_operand"     "   i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vv\t%0,%1,%2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "@pred_th_madc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(plus:VI_QHS









+      (vec_duplicate:VI_QHS









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"  " rK")









+        (match_operand 4 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "@pred_th_msbc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(minus:VI_QHS









+      (vec_duplicate:VI_QHS









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"  " rK")









+        (match_operand 4 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_expand "@pred_th_madc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_int_operand"))









+      (match_operand:VI_D 1 "register_operand"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand")









+        (match_operand 4 "const_int_operand")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+{









+  if (riscv_vector::sew64_scalar_helper (









+ operands,









+ /* scalar op */&operands[2],









+ /* vl */operands[3],









+ <MODE>mode,









+ riscv_vector::simm5_p (operands[2]),









+ [] (rtx *operands, rtx broadcast_scalar) {









+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],









+        broadcast_scalar, operands[3], operands[4]));









+        },









+ (riscv_vector::avl_type) INTVAL (operands[4])))









+    DONE;









+})
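+;; A reading of the expander above: riscv_vector::sew64_scalar_helper
+;; returns true when it has handled the 64-bit scalar itself (for
+;; instance by broadcasting it and invoking the lambda to emit the
+;; pattern), in which case we are DONE; when it returns false, the
+;; expander's own RTL stands and the *_scalar insns below match it.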









+









+(define_insn "*pred_th_madc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_D 1 "register_operand"    "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"  " rK")









+        (match_operand 4 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")









+ (unspec:<VM>









+    [(plus:VI_D









+      (vec_duplicate:VI_D









+        (sign_extend:<VEL>









+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))









+      (match_operand:VI_D 1 "register_operand"         "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"       " rK")









+        (match_operand 4 "const_int_operand"           "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmadc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_expand "@pred_th_msbc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_int_operand"))









+      (match_operand:VI_D 1 "register_operand"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand")









+        (match_operand 4 "const_int_operand")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+{









+  if (riscv_vector::sew64_scalar_helper (









+ operands,









+ /* scalar op */&operands[2],









+ /* vl */operands[3],









+ <MODE>mode,









+ false,









+ [] (rtx *operands, rtx broadcast_scalar) {









+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],









+        broadcast_scalar, operands[3], operands[4]));









+        },









+ (riscv_vector::avl_type) INTVAL (operands[4])))









+    DONE;









+})









+









+(define_insn "*pred_th_msbc<mode>_overflow_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))









+      (match_operand:VI_D 1 "register_operand"    "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"  " rK")









+        (match_operand 4 "const_int_operand"      "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")









+ (unspec:<VM>









+    [(minus:VI_D









+      (vec_duplicate:VI_D









+        (sign_extend:<VEL>









+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))









+      (match_operand:VI_D 1 "register_operand"         "  vr"))









+     (unspec:<VM>









+       [(match_operand 3 "vector_length_operand"      " rK")









+        (match_operand 4 "const_int_operand"          "  i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]









+  "TARGET_XTHEADVECTOR"









+  "vmsbc.vx\t%0,%1,%z2"









+  [(set_attr "type" "vicalu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "3")









+   (set (attr "avl_type_idx") (const_int 4))])









+









+(define_insn "*th_vsetvl<mode>"









+  [(set (match_operand:P 0 "register_operand" "=r")









+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")









+    (match_operand 2 "const_int_operand" "i")









+    (match_operand 3 "const_int_operand" "i")









+    (match_operand 4 "const_int_operand" "i")









+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))









+   (set (reg:SI VL_REGNUM)









+ (unspec:SI [(match_dup 1)









+     (match_dup 2)









+     (match_dup 3)] UNSPEC_VSETVL))









+   (set (reg:SI VTYPE_REGNUM)









+ (unspec:SI [(match_dup 2)









+     (match_dup 3)









+     (match_dup 4)









+     (match_dup 5)] UNSPEC_VSETVL))]









+  "TARGET_XTHEADVECTOR"









+  "vsetvli\t%0,%1,e%2,%m3"









+  [(set_attr "type" "vsetvl")









+   (set_attr "mode" "<MODE>")









+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))









+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))









+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))









+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])









+









+;; vsetvl zero,zero,vtype instruction.









+;; This pattern has no side effects and does not set the X0 register.









+(define_insn "*th_vsetvl_vtype_change_only"









+  [(set (reg:SI VTYPE_REGNUM)









+ (unspec:SI









+   [(match_operand 0 "const_int_operand" "i")









+    (match_operand 1 "const_int_operand" "i")









+    (match_operand 2 "const_int_operand" "i")









+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]









+  "TARGET_XTHEADVECTOR"









+  "vsetvli\tzero,zero,e%0,%m1"









+  [(set_attr "type" "vsetvl")









+   (set_attr "mode" "SI")









+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))









+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))









+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))









+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])









+









+;; vsetvl zero,rs1,vtype instruction.









+;; We need this pattern because we should avoid setting the X0 register









+;; in the vsetvl instruction pattern.









+(define_insn "*th_vsetvl_discard_result<mode>"









+  [(set (reg:SI VL_REGNUM)









+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")









+     (match_operand 1 "const_int_operand" "i")









+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))









+   (set (reg:SI VTYPE_REGNUM)









+ (unspec:SI [(match_dup 1)









+     (match_dup 2)









+     (match_operand 3 "const_int_operand" "i")









+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]









+  "TARGET_XTHEADVECTOR"









+  "vsetvli\tzero,%0,e%1,%m2"









+  [(set_attr "type" "vsetvl")









+   (set_attr "mode" "<MODE>")









+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))









+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))









+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))









+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])









+









+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.









+;; Since we have many optimization passes from "expand" to "reload_completed",









+;; such a pattern allows us to benefit from these optimizations.









+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"









+  [(set (match_operand:P 0 "register_operand" "=r")









+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")









+    (match_operand 2 "const_int_operand" "i")









+    (match_operand 3 "const_int_operand" "i")









+    (match_operand 4 "const_int_operand" "i")









+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]









+  "TARGET_XTHEADVECTOR"









+  "#"









+  "&& epilogue_completed"









+  [(parallel









+    [(set (match_dup 0)









+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)









+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))









+     (set (reg:SI VL_REGNUM)









+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))









+     (set (reg:SI VTYPE_REGNUM)









+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)









+       (match_dup 5)] UNSPEC_VSETVL))])]









+  ""









+  [(set_attr "type" "vsetvl")









+   (set_attr "mode" "SI")])









+









+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"        "   0")









+      (match_operand 5 "vector_length_operand"        "  rK")









+      (match_operand 6 "const_int_operand"            "   i")









+      (match_operand 7 "const_int_operand"            "   i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "comparison_except_ltge_operator"









+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")









+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_ltge_operator"









+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")









+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.v%o5\t%0,%4,%v5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_ltge_operator"









+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")









+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.v%o5\t%0,%4,%v5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"        "   0")









+      (match_operand 5 "vector_length_operand"        "  rK")









+      (match_operand 6 "const_int_operand"            "   i")









+      (match_operand 7 "const_int_operand"            "   i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "ltge_operator"









+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")









+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_ltge<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "ltge_operator"









+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")









+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.v%o5\t%0,%4,%v5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_ltge<mode>_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "ltge_operator"









+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")









+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.v%o5\t%0,%4,%v5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"          "  0")









+      (match_operand 5 "vector_length_operand"          " rK")









+      (match_operand 6 "const_int_operand"              "  i")









+      (match_operand 7 "const_int_operand"              "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")









+       (vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 4 "register_operand"      "  r"))])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")









+       (vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")









+       (vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])









+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"           "  0")









+      (match_operand 5 "vector_length_operand"           " rK")









+      (match_operand 6 "const_int_operand"               "  i")









+      (match_operand 7 "const_int_operand"               "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "equality_operator"









+      [(vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 4 "register_operand"       "  r"))









+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))









+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_eqne<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_QHS









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))









+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"           "  0")









+      (match_operand 5 "vector_length_operand"           " rK")









+      (match_operand 6 "const_int_operand"               "  i")









+      (match_operand 7 "const_int_operand"               "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")









+       (vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 4 "register_operand"       "  r"))])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"           "  0")









+      (match_operand 5 "vector_length_operand"           " rK")









+      (match_operand 6 "const_int_operand"               "  i")









+      (match_operand 7 "const_int_operand"               "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 4 "register_operand"       "  r"))









+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")









+       (vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")









+       (vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])









+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))









+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_eqne<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))









+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"          "  0")









+      (match_operand 5 "vector_length_operand"          " rK")









+      (match_operand 6 "const_int_operand"              "  i")









+      (match_operand 7 "const_int_operand"              "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")









+       (vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")









+       (vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "comparison_except_eqge_operator"









+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")









+       (vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"            "  0")









+      (match_operand 5 "vector_length_operand"            " rK")









+      (match_operand 6 "const_int_operand"                "  i")









+      (match_operand 7 "const_int_operand"                "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))









+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vms%B2.vx\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))









+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))









+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")









+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vv\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"          "  0")









+      (match_operand 5 "vector_length_operand"          " rK")









+      (match_operand 6 "const_int_operand"              "  i")









+      (match_operand 7 "const_int_operand"              "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "signed_order_operator"









+      [(match_operand:V_VLSF 3 "register_operand"           " vr")









+       (match_operand:V_VLSF 4 "register_operand"           " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vv\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")









+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vv\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"         "  0")









+      (match_operand 5 "vector_length_operand"         " rK")









+      (match_operand 6 "const_int_operand"             "  i")









+      (match_operand 7 "const_int_operand"             "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "signed_order_operator"









+      [(match_operand:V_VLSF 3 "register_operand"      " vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 4 "register_operand"     "  f"))])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vf\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"         "  0")









+      (match_operand 5 "vector_length_operand"         " rK")









+      (match_operand 6 "const_int_operand"             "  i")









+      (match_operand 7 "const_int_operand"             "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 4 "register_operand"     "  f"))









+       (match_operand:V_VLSF 3 "register_operand"      " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vf\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))









+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_eqne<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))









+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









\ No newline at end of file









diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
(define_mode_iterator VF_ZVFHMIN [
  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
  (RVVM4SI "TARGET_64BIT")
  (RVVM2SI "TARGET_64BIT")
  (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM2HI "TARGET_64BIT")
  (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
  (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
  (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
(define_mode_iterator VF_HS [
  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
  (RVVM4HF "TARGET_ZVFH")
  (RVVM2HF "TARGET_ZVFH")
  (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
  RVVM1SI
  (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
  RVVM1HI
  RVVM2SI
  (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
  RVVM1SI
  (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
  RVVM1HI
  RVVM2SI
  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
  (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
  (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
  (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
  (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
  (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
  (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
(define_mode_iterator VWCONVERTI [
  (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
(define_mode_iterator V_VLS_F_CONVERT_SI [
  (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
  (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>_whole"
  [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>"
  [(set (match_operand:VB 0 "register_operand" "=vr")
	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "vmv1r.v\t%0,%1"
  [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "v<sz>ext.vf2\t%0,%3%p1"
  [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "v<sz>ext.vf4\t%0,%3%p1"
  [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "v<sz>ext.vf8\t%0,%3%p1"
  [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
    }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

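A usage sketch of the new effective target (this test file is hypothetical, not part
of the series; it only relies on the riscv_xtheadvector effective target and the
__riscv_xtheadvector predefine that the proc above checks for):

/* { dg-do compile { target riscv_xtheadvector } } */
int
main (void)
{
#ifndef __riscv_xtheadvector
  __builtin_abort ();   /* Unreachable when the effective target matched.  */
#endif
  return 0;
}
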
^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:41           ` Re: Re: [PATCH " joshua
  2023-12-20 14:48             ` Re: [PATCH " 钟居哲
@ 2023-12-20 14:55             ` 钟居哲
  2023-12-20 15:21               ` Re: Re: [PATCH " joshua
  1 sibling, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 14:55 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu

My first impression is that you are just copying the current vector.md with no
pattern changes, simply adding a "th_" string to the pattern names.

It looks odd to me.

Take LLVM for example: even though the build time for the LLVM match table and
tablegen is not an issue for now, they still try hard to minimize the match table
and optimize the tablegen.

To me this patch just doubles the patterns, and potentially explodes the number of
RISC-V patterns.

I think we should optimize the thead vector patterns and eliminate the redundant,
unnecessary ones, to avoid affecting the build of the GCC toolchain.



juzhe.zhong@rivai.ai

From: joshua
Sent: 2023-12-20 22:41
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

Yes, XTheadVector does not have vfneg.v as a pseudo instruction for vfsgnjn.vv.
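A minimal sketch of the point, using the standard intrinsic API (the instruction
mapping in the comment is an assumption based on the RVV 1.0 pseudoinstruction
table, not compiler output from this series):

#include <riscv_vector.h>

vfloat32m1_t
negate (vfloat32m1_t v, size_t vl)
{
  /* RVV 1.0 assemblers accept vfneg.v vd,vs as an alias that expands to
     vfsgnjn.vv vd,vs,vs; XTheadVector has no vfneg.v spelling, so the
     port must emit vfsgnjn.vv directly.  */
  return __riscv_vfneg_v_f32m1 (v, vl);
}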

We have listed all the differences between vector and xtheadvector in our spec;
you may refer to it:

https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc
https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd

Joshua

------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: 2023-12-20 (Wednesday) 22:27
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Why do you add this?

+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+	(if_then_else:V_VLSF
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+	     (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+	     (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+	     (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)
+	     (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+	  (any_float_unop:V_VLSF
+	    (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+	  (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+	(symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])

XTheadVector does not have th.vfneg.v, does it?

juzhe.zhong@rivai.ai

From: joshua
Sent: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

The patterns you supposed to be redundant are all necessary, because they generate
different instructions from vector.

Take pred_th_unit_strided_store as an example: xtheadvector does not have <sew> in
its load/store instructions, and we cannot reuse the same pattern as vector. That
is why we define a new function_base in thead-vector-builtins-functions.def.
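
A minimal sketch of that difference at the source level (the intrinsic calls are
from the standard riscv_vector.h API; the assembly in the comments is an
illustrative assumption based on the pattern templates quoted below):

#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

/* Copy vl 32-bit elements; the element width is explicit in the RVV 1.0
   mnemonic but comes from vtype under XTheadVector.  */
void
copy (const int32_t *in, int32_t *out, size_t vl)
{
  vint32m1_t v = __riscv_vle32_v_i32m1 (in, vl);  /* V: vle32.v  XThead: vle.v */
  __riscv_vse32_v_i32m1 (out, v, vl);             /* V: vse32.v  XThead: vse.v */
}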

Joshua

------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: 2023-12-20 (Wednesday) 22:00
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	      (match_operand 3 "vector_length_operand"    "   rK")
+	      (match_operand 4 "const_int_operand"        "    i")
+	      (reg:SI VL_REGNUM)
+	      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+	   (match_operand:VT 2 "register_operand"         "   vr")
+	   (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])

These patterns are redundant; just the names are different.

They should be removed.

juzhe.zhong@rivai.ai

From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch handles the differences in instruction generation between Vector and
XTheadVector; adding the th. prefix to all XTheadVector instructions is not
included.

For some vector patterns that cannot be avoided, we use !TARGET_XTHEADVECTOR to
disable them in vector.md, in order not to generate instructions that
xtheadvector does not support, like vmv1r and vsext.vf2.

gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc









index f0676c830e8..4478395ab77 100644









--- a/gcc/config.gcc









+++ b/gcc/config.gcc









@@ -547,9 +547,9 @@ riscv*)









extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"









extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"









extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"









- extra_objs="${extra_objs} thead.o riscv-target-attr.o"









+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"









d_target_objs="riscv-d.o"









- extra_headers="riscv_vector.h"









+ extra_headers="riscv_vector.h riscv_th_vector.h"









target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"









target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"









;;









diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md









index 8b8a92f10a1..1fac56c7095 100644









--- a/gcc/config/riscv/autovec.md









+++ b/gcc/config/riscv/autovec.md









@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"









  [(match_operand      0 "register_operand")









    (match_operand      1 "memory_operand")









    (match_operand:ANYI 2 "const_int_operand")]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  {









    riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],









  operands[2]);









diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md









index 1a3a4f1ecbb..d910367e59c 100644









--- a/gcc/config/riscv/predicates.md









+++ b/gcc/config/riscv/predicates.md









@@ -64,8 +64,9 @@ (define_predicate "csr_operand"









        (match_operand 0 "register_operand")))









(define_predicate "vector_csr_operand"









-  (ior (match_operand 0 "const_csr_operand")









-       (match_operand 0 "register_operand")))









+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")









+      (match_operand 0 "const_csr_operand"))









+    (match_operand 0 "register_operand")))









;; V has 32-bit unsigned immediates.  This happens to be the same constraint as









;; the csr_operand, but it's not CSR related.









@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"









;; Predicates for the V extension.









(define_special_predicate "vector_length_operand"









  (ior (match_operand 0 "pmode_register_operand")









-       (match_operand 0 "const_csr_operand")))









+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")









+    (match_operand 0 "const_csr_operand"))))









(define_special_predicate "autovec_length_operand"









  (ior (match_operand 0 "pmode_register_operand")









diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc









index 11c1f74d0b3..ec8f3486fd8 100644









--- a/gcc/config/riscv/riscv-string.cc









+++ b/gcc/config/riscv/riscv-string.cc









@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)









bnez a2, loop                   # Any more?









ret                             # Return









  */









+   if (TARGET_XTHEADVECTOR)









+    return false;









+









  gcc_assert (TARGET_VECTOR);









  HOST_WIDE_INT potential_ew









diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc









index 486f5deb296..710332e17db 100644









--- a/gcc/config/riscv/riscv-v.cc









+++ b/gcc/config/riscv/riscv-v.cc









@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)









      return true;









    }









+  if (TARGET_XTHEADVECTOR)









+      {









+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,









+   RVV_VLMAX, GEN_INT(VLMAX)));









+ return true;









+      }









+









  if (riscv_v_ext_vls_mode_p (mode))









    {









      if (GET_MODE_NUNITS (mode).to_constant () <= 31)









@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()









      compiler pick up either agnostic or undisturbed. Maybe we









      will have a compile option like -mprefer=agnostic to set









      this value???.  */









-  return TAIL_ANY;









+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;









}









/* Get prefer mask policy.  */









@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()









      compiler pick up either agnostic or undisturbed. Maybe we









      will have a compile option like -mprefer=agnostic to set









      this value???.  */









-  return MASK_ANY;









+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;









}









/* Get avl_type rtx.  */









@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)









bool









vls_mode_valid_p (machine_mode vls_mode)









{









-  if (!TARGET_VECTOR)









+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)









    return false;









  if (riscv_autovec_preference == RVV_SCALABLE)









diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc









index 4a754e0228f..6b49404a1fa 100644









--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc









+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc









@@ -33,6 +33,25 @@









namespace riscv_vector {









+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are









+   valid for the function.  */









+









+static bool









+check_type (tree return_type, vec<tree> &argument_types)









+{









+  tree arg;









+  unsigned i;









+









+  if (!return_type)









+    return false;









+









+  FOR_EACH_VEC_ELT (argument_types, i, arg)









+    if (!arg)









+      return false;









+









+  return true;









+}









+









/* Add one function instance for GROUP, using operand suffix at index OI,









    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */









static void









@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,









    group.ops_infos.types[vec_type_idx].index);









  b.allocate_argument_types (function_instance, argument_types);









  b.apply_predication (function_instance, return_type, argument_types);









+









+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))









+    return;









+









  b.add_overloaded_function (function_instance, *group.shape);









  b.add_unique_function (function_instance, (*group.shape), return_type,









argument_types);









diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc









index 4e2c66c2de7..f5f9000d89c 100644









--- a/gcc/config/riscv/riscv-vector-builtins.cc









+++ b/gcc/config/riscv/riscv-vector-builtins.cc









@@ -51,6 +51,7 @@









#include "riscv-vector-builtins.h"









#include "riscv-vector-builtins-shapes.h"









#include "riscv-vector-builtins-bases.h"









+#include "thead-vector-builtins.h"









using namespace riscv_vector;









@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {









#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \









  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},









#include "riscv-vector-builtins-functions.def"









+#undef DEF_RVV_FUNCTION









+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \









+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},









+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \









+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},









+#include "thead-vector-builtins-functions.def"









};









/* The RVV types, with their built-in









diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h









index 4f38c09d73d..bb463510dd2 100644









--- a/gcc/config/riscv/riscv-vector-builtins.h









+++ b/gcc/config/riscv/riscv-vector-builtins.h









@@ -123,6 +123,7 @@ enum required_ext









  ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */









  ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */









  ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */









+  XTHEADVECTOR_EXT,   /* XTheadVector extension */









};









/* Enumerates the RVV operand types.  */









@@ -233,7 +234,7 @@ struct function_group_info









    switch (ext_value)









    {









      case VECTOR_EXT:









-        return TARGET_VECTOR;









+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);









      case ZVBB_EXT:









        return TARGET_ZVBB;









      case ZVBB_OR_ZVKB_EXT:









@@ -252,6 +253,8 @@ struct function_group_info









        return TARGET_ZVKSED;









      case ZVKSH_EXT:









        return TARGET_ZVKSH;









+      case XTHEADVECTOR_EXT:









+ return TARGET_XTHEADVECTOR;









      default:









        gcc_unreachable ();









    }









diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def









index 5c9f9bcbc3e..f7a66b34bae 100644









--- a/gcc/config/riscv/riscv-vector-switch.def









+++ b/gcc/config/riscv/riscv-vector-switch.def









@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.









#endif









/* Disable modes if TARGET_MIN_VLEN == 32.  */









-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)









-ENTRY (RVVMF32BI, true, LMUL_F4, 32)









-ENTRY (RVVMF16BI, true, LMUL_F2, 16)









+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)









+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)









+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)









ENTRY (RVVMF8BI, true, LMUL_1, 8)









ENTRY (RVVMF4BI, true, LMUL_2, 4)









ENTRY (RVVMF2BI, true, LMUL_4, 2)









@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)









ENTRY (RVVM4QI, true, LMUL_4, 2)









ENTRY (RVVM2QI, true, LMUL_2, 4)









ENTRY (RVVM1QI, true, LMUL_1, 8)









-ENTRY (RVVMF2QI, true, LMUL_F2, 16)









-ENTRY (RVVMF4QI, true, LMUL_F4, 32)









-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)









+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)









+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)









+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)









/* Disable modes if TARGET_MIN_VLEN == 32.  */









ENTRY (RVVM8HI, true, LMUL_8, 2)









ENTRY (RVVM4HI, true, LMUL_4, 4)









ENTRY (RVVM2HI, true, LMUL_2, 8)









ENTRY (RVVM1HI, true, LMUL_1, 16)









-ENTRY (RVVMF2HI, true, LMUL_F2, 32)









-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)









+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)









+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)









/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */









ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)









ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)









ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)









ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)









-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)









-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)









+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)









+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)









/* Disable modes if TARGET_MIN_VLEN == 32.  */









ENTRY (RVVM8SI, true, LMUL_8, 4)









ENTRY (RVVM4SI, true, LMUL_4, 8)









ENTRY (RVVM2SI, true, LMUL_2, 16)









ENTRY (RVVM1SI, true, LMUL_1, 32)









-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)









+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)









/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */









ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)









ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)









ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)









ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)









-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)









+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)









/* Disable modes if !TARGET_VECTOR_ELEN_64.  */









ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)









@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)









#endif









TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)









TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)









TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)









TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)









TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)









TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)









TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)









TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)









TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)









TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)









TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)









-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)









-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)









-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)









+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)









+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)









+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)









TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)









TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)









TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)









TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)









TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)









TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)









TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)









TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)









TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)









-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)









+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)









TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)









TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)









TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)









TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)









TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)









TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)









TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)









TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)









TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)









TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)









-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)









+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)









TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)









TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
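
(The effect of guarding all MF* ENTRY/TUPLE_ENTRY rows on
!TARGET_XTHEADVECTOR is that the fractional-LMUL machine modes, and with
them the corresponding intrinsic types, never exist on such a target.
An illustrative expectation rather than a test from this series; the
exact -march string is whatever enables xtheadvector.)

  #include <riscv_th_vector.h>

  void
  use_types (void)
  {
    vint32m1_t a;           /* LMUL_1 row is still enabled.  */
    vint32m2_t b;           /* LMUL_2 row is still enabled.  */
    /* vint32mf2_t c; */    /* RVVMF2SI is gated out above, so this
                               type should be rejected here.  */
    (void) a;
    (void) b;
  }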









diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc









index d3010bed8d8..18cc64b63e6 100644









--- a/gcc/config/riscv/riscv.cc









+++ b/gcc/config/riscv/riscv.cc









@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)









{









  if (riscv_v_ext_vector_mode_p (mode))









    {









+      if (TARGET_XTHEADVECTOR)









+        return BYTES_PER_RISCV_VECTOR;









+









      poly_int64 nunits = GET_MODE_NUNITS (mode);









      poly_int64 mode_size = GET_MODE_SIZE (mode);









@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)









static machine_mode









riscv_preferred_simd_mode (scalar_mode mode)









{









-  if (TARGET_VECTOR)









+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)









    return riscv_vector::preferred_simd_mode (mode);









  return word_mode;









@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)









unsigned int









riscv_autovectorize_vector_modes (vector_modes *modes, bool all)









{









-  if (TARGET_VECTOR)









+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)









    return riscv_vector::autovectorize_vector_modes (modes, all);









  return default_autovectorize_vector_modes (modes, all);









@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)









  return false;









}









+/* Implements target hook vector_mode_supported_any_target_p.  */









+









+static bool









+riscv_vector_mode_supported_any_target_p (machine_mode mode)









+{









+  if (TARGET_XTHEADVECTOR)









+    return false;









+  return true;









+}









+









/* Initialize the GCC target structure.  */









#undef TARGET_ASM_ALIGNED_HI_OP









#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"









@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)









#undef TARGET_PREFERRED_ELSE_VALUE









#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value









+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P









+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p









+









struct gcc_target targetm = TARGET_INITIALIZER;









#include "gt-riscv.h"









diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h









new file mode 100644









index 00000000000..6f47e0c90a4









--- /dev/null









+++ b/gcc/config/riscv/riscv_th_vector.h









@@ -0,0 +1,49 @@









+/* RISC-V 'XTheadVector' Extension intrinsics include file.









+   Copyright (C) 2022-2023 Free Software Foundation, Inc.









+









+   This file is part of GCC.









+









+   GCC is free software; you can redistribute it and/or modify it









+   under the terms of the GNU General Public License as published









+   by the Free Software Foundation; either version 3, or (at your









+   option) any later version.









+









+   GCC is distributed in the hope that it will be useful, but WITHOUT









+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY









+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public









+   License for more details.









+









+   Under Section 7 of GPL version 3, you are granted additional









+   permissions described in the GCC Runtime Library Exception, version









+   3.1, as published by the Free Software Foundation.









+









+   You should have received a copy of the GNU General Public License and









+   a copy of the GCC Runtime Library Exception along with this program;









+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see









+   <http://www.gnu.org/licenses/>.  */









+









+#ifndef __RISCV_TH_VECTOR_H









+#define __RISCV_TH_VECTOR_H









+









+#include <stdint.h>









+#include <stddef.h>









+









+#ifndef __riscv_xtheadvector









+#error "XTheadVector intrinsics require the xtheadvector extension."









+#else









+#ifdef __cplusplus









+extern "C" {









+#endif









+









+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does









+   not define the RVV types and intrinsic functions directly in C and C++









+   code, but instead uses the following pragma to tell GCC to insert the









+   necessary type and function definitions itself.  The net effect is the









+   same, and the file is a complete implementation of riscv_th_vector.h.  */









+#pragma riscv intrinsic "vector"









+









+#ifdef __cplusplus









+}









+#endif // __cplusplus









+#endif // __riscv_xtheadvector









+#endif // __RISCV_TH_VECTOR_H
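
(Sketch of the intended use of the new header, guarded on the same
__riscv_xtheadvector predefine the header itself checks.  The function
is an assumption for illustration; the vector types come from the
pragma.)

  #if defined (__riscv_xtheadvector)
  #include <riscv_th_vector.h>

  vint8m1_t
  pass_through (vint8m1_t v)
  {
    return v;   /* vint8m1_t exists once the pragma has run.  */
  }
  #endif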









diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv









index 067771e3c97..09512092056 100644









--- a/gcc/config/riscv/t-riscv









+++ b/gcc/config/riscv/t-riscv









@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \









  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \









  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \









  $(srcdir)/config/riscv/riscv-vector-builtins-types.def \









+  $(srcdir)/config/riscv/thead-vector-builtins.h \









+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \









  $(RISCV_BUILTINS_H)









$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \









$(srcdir)/config/riscv/riscv-vector-builtins.cc









@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \









$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \









$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc









+thead-vector-builtins.o: \









+  $(srcdir)/config/riscv/thead-vector-builtins.cc \









+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \









+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \









+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \









+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \









+  rtx-vector-builder.h \









+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \









+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \









+  $(srcdir)/config/riscv/thead-vector-builtins.h \









+  $(RISCV_BUILTINS_H)









+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \









+		$(srcdir)/config/riscv/thead-vector-builtins.cc









+









riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \









  $(SYSTEM_H) $(TM_H)









$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \









diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def









new file mode 100644









index 00000000000..a85ca24cb31









--- /dev/null









+++ b/gcc/config/riscv/thead-vector-builtins-functions.def









@@ -0,0 +1,627 @@









+#ifndef DEF_RVV_FUNCTION









+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)









+#endif









+









+#ifndef DEF_THEAD_RVV_FUNCTION









+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)









+#endif









+









+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT









+/* Internal helper functions for gimple fold use.  */









+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)









+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)









+









+/* 6. Configuration-Setting Instructions.  */









+









+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)









+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)









+









+/* 7. Vector Loads and Stores. */









+









+// 7.4. Vector Unit-Stride Instructions









+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)









+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)









+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)









+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)









+









+// 7.5. Vector Strided Instructions









+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)









+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)









+









+// 7.6. Vector Indexed Instructions









+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)









+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)









+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)









+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)









+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)









+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)









+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)









+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)









+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)









+









+// 7.7. Unit-stride Fault-Only-First Loads









+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)









+









+// TODO: 7.8. Vector Load/Store Segment Instructions









+









+/* 11. Vector Integer Arithmetic Instructions.  */









+









+// 11.1. Vector Single-Width Integer Add and Subtract









+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)









+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)









+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)









+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)









+









+// 11.2. Vector Widening Integer Add/Subtract









+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)









+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)









+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)









+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)









+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)









+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)









+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)









+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)









+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)









+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)









+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)









+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)









+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)









+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)









+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)









+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)









+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)









+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)









+









+// 11.3. Vector Integer Extension









+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)









+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)









+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)









+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)









+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)









+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)









+









+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions









+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)









+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)









+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)









+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)









+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)









+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)









+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)









+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)









+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)









+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)









+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)









+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)









+









+// 11.5. Vector Bitwise Logical Instructions









+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)









+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)









+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)









+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)









+









+// 11.6. Vector Single-Width Shift Instructions









+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)









+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)









+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)









+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)









+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)









+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)









+









+// 11.7. Vector Narrowing Integer Right Shift Instructions









+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)









+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)









+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)









+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)









+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)









+









+// 11.8. Vector Integer Compare Instructions









+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)









+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)









+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)









+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)









+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)









+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)









+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)









+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)









+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)









+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)









+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)









+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)









+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)









+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)









+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)









+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)









+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)









+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)









+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)









+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)









+









+// 11.9. Vector Integer Min/Max Instructions









+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)









+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)









+









+// 11.10. Vector Single-Width Integer Multiply Instructions









+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)









+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)









+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)









+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)









+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)









+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)









+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)









+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)









+









+// 11.11. Vector Integer Divide Instructions









+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)









+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)









+









+// 11.12. Vector Widening Integer Multiply Instructions









+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)









+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)









+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)









+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)









+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)









+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)









+









+// 11.13. Vector Single-Width Integer Multiply-Add Instructions









+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)









+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)









+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)









+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)









+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)









+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)









+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)









+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)









+









+// 11.14. Vector Widening Integer Multiply-Add Instructions









+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)









+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)









+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)









+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)









+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)









+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)









+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)









+









+// 11.15. Vector Integer Merge Instructions









+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)









+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)









+









+// 11.16 Vector Integer Move Instructions









+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)









+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)









+









+/* 12. Vector Fixed-Point Arithmetic Instructions.  */









+









+// 12.1. Vector Single-Width Saturating Add and Subtract









+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)









+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)









+









+// 12.2. Vector Single-Width Averaging Add and Subtract









+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)









+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)









+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)









+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)









+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)









+









+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation









+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)









+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)









+









+// 12.4. Vector Single-Width Scaling Shift Instructions









+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)









+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)









+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)









+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)









+









+// 12.5. Vector Narrowing Fixed-Point Clip Instructions









+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)









+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)









+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)









+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)









+









+/* 13. Vector Floating-Point Instructions.  */









+









+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions









+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)









+









+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions









+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)









+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)









+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)









+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)









+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)









+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)









+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)









+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)









+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)









+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)









+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)









+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)









+









+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions









+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)









+









+// 13.5. Vector Widening Floating-Point Multiply









+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)









+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)









+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)









+









+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions









+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)









+









+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)









+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)









+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)









+









+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions









+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)









+









+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)









+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)









+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)









+









+// 13.8. Vector Floating-Point Square-Root Instruction









+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)









+









+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)









+









+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction









+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)









+









+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction









+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)









+









+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)









+









+// 13.11. Vector Floating-Point MIN/MAX Instructions









+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)









+









+// 13.12. Vector Floating-Point Sign-Injection Instructions









+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)









+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)









+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)









+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)









+









+// 13.13. Vector Floating-Point Compare Instructions









+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)









+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)









+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)









+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)









+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)









+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)









+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)









+









+// 13.14. Vector Floating-Point Classify Instruction









+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)









+









+// 13.15. Vector Floating-Point Merge Instruction









+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)









+









+// 13.16. Vector Floating-Point Move Instruction









+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)









+









+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions









+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)









+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
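As a usage sketch of what the definitions above expose to users, assuming
the riscv_th_vector.h header added by this series and the standard RVV
intrinsic spellings (whether XTheadVector keeps exactly these names is an
assumption of the sketch, not something this hunk guarantees):

    #include <stddef.h>
    #include <riscv_vector.h>  /* or riscv_th_vector.h from this series */

    /* Ordered float reduction built on vsetvl/vle/vfredosum, i.e. the
       vfredosum entry that DEF_THEAD_RVV_FUNCTION above routes to
       th_vfredosum when XTheadVector is enabled.  */
    float
    sum_f32 (const float *a, size_t n)
    {
      /* Seed element 0 of the accumulator with 0.0f.  */
      vfloat32m1_t acc = __riscv_vfmv_v_f_f32m1 (0.0f, 1);
      for (size_t vl; n > 0; n -= vl, a += vl)
        {
          vl = __riscv_vsetvl_e32m1 (n);
          vfloat32m1_t v = __riscv_vle32_v_f32m1 (a, vl);
          /* Reduce v into element 0, using acc[0] as the running sum.  */
          acc = __riscv_vfredosum_vs_f32m1_f32m1 (v, acc, vl);
        }
      return __riscv_vfmv_f_s_f32m1_f32 (acc);
    }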
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
+
+/* Implements
+ * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
+ * codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+        int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+        if (STORE_P)
+          return e.use_exact_insn (
+            code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+                                            e.index_mode ()));
+        else
+          {
+            unsigned src_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+            unsigned dst_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+            if (dst_eew_bitsize == src_eew_bitsize)
+              {
+                return e.use_exact_insn (
+                  code_for_pred_th_indexed_load_same_eew (
+                    unspec, e.vector_mode ()));
+              }
+            else if (dst_eew_bitsize > src_eew_bitsize)
+              {
+                unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_greater_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+            else
+              {
+                unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+          }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_strided_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
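(To make the EEW dispatch in th_loadstore::expand concrete: for an ordered
indexed load whose destination has 32-bit elements and whose index vector
has 8-bit elements -- think vloxei8 on f32/i32 data, an illustrative case
rather than one spelled out here -- dst_eew_bitsize is 32 and
src_eew_bitsize is 8, so factor = 32 / 8 = 4 and the x4_greater_eew load
pattern is chosen; 64-bit indices on 16-bit elements take the other branch
with factor = 64 / 16 = 4 and the x4_smaller_eew pattern.  Any other ratio
hits gcc_unreachable.)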
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vmadc.  */
+class th_vmadc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (
+        code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
+
+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
+
+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
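For reference, each BASE invocation above is a one-line expansion of the
macro defined just before the list; for instance

    BASE (th_vcpop)

expands to

    namespace bases { const function_base *const th_vcpop = &th_vcpop_obj; }

which is exactly what the extern declarations in thead-vector-builtins.h
below bind against.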
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+
+} // end namespace riscv_vector
+
+#endif
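The code_for_pred_th_* helpers called from the expanders above are not
written by hand; they are generated from the "@"-prefixed pattern names in
thead-vector.md, which follows.  For example,

    (define_insn "@pred_th_strided_load<mode>" ...)

makes the generator emit a code_for_pred_th_strided_load (machine_mode)
helper, so th_vlsseg::expand's

    code_for_pred_th_strided_load (e.vector_mode ())

resolves to that insn instantiated for the requested mode.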
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md









new file mode 100644









index 00000000000..072fb5e68e1









--- /dev/null









+++ b/gcc/config/riscv/thead-vector.md









@@ -0,0 +1,2574 @@









+(define_c_enum "unspec" [









+  UNSPEC_TH_VWLDST









+])









+









+(define_int_attr th_order [









+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")









+])









+









+(define_int_attr th_reduc_op [









+  (UNSPEC_REDUC_SUM "redsum")









+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")









+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")









+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")









+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")









+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")









+])









+









+(define_code_iterator neg_unop [neg])









+(define_code_iterator not_unop [not])









+









+(define_code_iterator any_float_unop_neg [neg])









+(define_code_iterator any_float_unop_abs [abs])









+









+(define_mode_iterator V_VLS_VT [V VLS VT])









+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])









+









+(define_split









+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")









+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]









+  "TARGET_XTHEADVECTOR"









+  [(const_int 0)]









+  {









+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],









+       RVV_VLMAX, GEN_INT(riscv_vector::VLMAX)));









+    DONE;









+  })









+









+(define_insn_and_split "@pred_th_whole_mov<mode>"









+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")









+ (unspec:V_VLS_VT









+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")









+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")









+    (match_operand 3 "const_1_operand"         "  i, i, i")









+    (reg:SI VL_REGNUM)









+    (reg:SI VTYPE_REGNUM)]









+ UNSPEC_TH_VWLDST))]









+  "TARGET_XTHEADVECTOR"









+  "@









+   vmv.v.v\t%0,%1









+   vle.v\t%0,%1









+   vse.v\t%1,%0"









+  "&& REG_P (operands[0]) && REG_P (operands[1])









+   && REGNO (operands[0]) == REGNO (operands[1])"









+  [(const_int 0)]









+  ""









+  [(set_attr "type" "vimov,vlds,vlds")









+   (set_attr "mode" "<MODE>")









+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))









+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))









+   (set (attr "avl_type_idx") (const_int 3))









+   (set_attr "vl_op_idx" "2")])









+









+(define_insn_and_split "@pred_th_whole_mov<mode>"









+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")









+ (unspec:VB









+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")









+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")









+    (match_operand 3 "const_1_operand"         "  i, i, i")









+    (reg:SI VL_REGNUM)









+    (reg:SI VTYPE_REGNUM)]









+ UNSPEC_TH_VWLDST))]









+  "TARGET_XTHEADVECTOR"









+  "@









+   vmv.v.v\t%0,%1









+   vle.v\t%0,%1









+   vse.v\t%1,%0"









+  "&& REG_P (operands[0]) && REG_P (operands[1])









+   && REGNO (operands[0]) == REGNO (operands[1])"









+  [(const_int 0)]









+  ""









+  [(set_attr "type" "vimov,vlds,vlds")









+   (set_attr "mode" "<MODE>")









+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))









+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))









+   (set (attr "avl_type_idx") (const_int 3))









+   (set_attr "vl_op_idx" "2")









+   (set (attr "sew") (const_int 8))









+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])









+









+(define_expand "@pred_th_mov<mode>"









+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")









+    (if_then_else:V_VLS









+      (unspec:<VM>









+        [(match_operand:<VM> 1 "vector_mask_operand")









+         (match_operand 4 "vector_length_operand")









+         (match_operand 5 "const_int_operand")









+         (match_operand 6 "const_int_operand")









+         (match_operand 7 "const_int_operand")









+         (reg:SI VL_REGNUM)









+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+      (match_operand:V_VLS 3 "vector_move_operand")









+      (match_operand:V_VLS 2 "vector_merge_operand")))]









+  "TARGET_XTHEADVECTOR"









+  {})









+









+(define_insn_and_split "*pred_broadcast<mode>"









+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")









+ (if_then_else:V_VLSI









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")









+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (vec_duplicate:V_VLSI









+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))









+   (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "@









+   vmv.v.x\t%0,%3









+   vmv.v.x\t%0,%3









+   vlse.v\t%0,%3,zero,%1.t









+   vlse.v\t%0,%3,zero,%1.t









+   vlse.v\t%0,%3,zero









+   vlse.v\t%0,%3,zero









+   vmv.s.x\t%0,%3









+   vmv.s.x\t%0,%3"









+  "(register_operand (operands[3], <VEL>mode)









+  || CONST_POLY_INT_P (operands[3]))









+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"









+  [(set (match_dup 0)









+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)









+      (match_dup 5) (match_dup 6) (match_dup 7)









+      (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (vec_duplicate:V_VLSI (match_dup 3))









+   (match_dup 2)))]









+  {









+    gcc_assert (can_create_pseudo_p ());









+    if (CONST_POLY_INT_P (operands[3]))









+      {









+ rtx tmp = gen_reg_rtx (<VEL>mode);









+ emit_move_insn (tmp, operands[3]);









+ operands[3] = tmp;









+      }









+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),









+ GET_MODE_ALIGNMENT (<VEL>mode));









+    m = validize_mem (m);









+    emit_move_insn (m, operands[3]);









+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));









+    operands[3] = m;









+









+    /* For SEW = 64 in RV32 system, we expand vmv.s.x:









+       andi a2,a2,1









+       vsetvl zero,a2,e64









+       vlse64.v  */









+    if (satisfies_constraint_Wb1 (operands[1]))









+      {









+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);









+ operands[1] = CONSTM1_RTX (<VM>mode);









+      }









+  }









+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_broadcast<mode>"









+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")









+ (if_then_else:V_VLSF_ZVFHMIN









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")









+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (vec_duplicate:V_VLSF_ZVFHMIN









+     (match_operand:<VEL> 3 "direct_broadcast_operand"       " f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))









+   (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "@









+   vfmv.v.f\t%0,%3









+   vfmv.v.f\t%0,%3









+   vlse.v\t%0,%3,zero,%1.t









+   vlse.v\t%0,%3,zero,%1.t









+   vlse.v\t%0,%3,zero









+   vlse.v\t%0,%3,zero









+   vfmv.s.f\t%0,%3









+   vfmv.s.f\t%0,%3"









+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")









+   (set_attr "mode" "<MODE>")])









+









+;; vle.v/vse.v,vmv.v.v
+(define_insn_and_split "*pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")
+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+        || register_operand (operands[3], <MODE>mode)))"
+  "@
+   vle.v\t%0,%3%p1
+   vle.v\t%0,%3
+   vle.v\t%0,%3,%1.t
+   vse.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
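+;; Whole mask-register move: split into a plain move for memory operands and
+;; VLMAX register copies; otherwise emit vmcpy.m, vmclr.m or vmset.m.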
+(define_insn_and_split "@pred_th_mov<mode>"
+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")
+        (if_then_else:VB_VLS
+          (unspec:VB_VLS
+            [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+             (match_operand 4 "vector_length_operand"                " rK,  rK,  rK,  rK,  rK")
+             (match_operand 5 "const_int_operand"                    "  i,   i,   i,   i,   i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")
+          (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   #
+   #
+   vmcpy.m\t%0,%3
+   vmclr.m\t%0
+   vmset.m\t%0"
+  "&& !reload_completed"
+  [(const_int 0)]
+  {
+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+        || (REG_P (operands[0]) && REG_P (operands[3])
+            && INTVAL (operands[5]) == riscv_vector::VLMAX))
+      {
+        emit_move_insn (operands[0], operands[3]);
+        DONE;
+      }
+
+    FAIL;
+  }
+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+   (set_attr "mode" "<MODE>")])
+
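+;; Predicated store; masked-off elements keep the prior memory contents
+;; via the (match_dup 0) merge operand.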
+(define_insn "@pred_th_store<mode>"









+  [(set (match_operand:V 0 "memory_operand"                 "+m")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")









+      (match_operand 3 "vector_length_operand"    "   rK")









+      (match_operand 4 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operand:V 2 "register_operand"         "    vr")









+   (match_dup 0)))]









+  "TARGET_XTHEADVECTOR"









+  "vse.v\t%2,%0%p1"









+  [(set_attr "type" "vste")









+   (set_attr "mode" "<MODE>")









+   (set (attr "avl_type_idx") (const_int 4))









+   (set_attr "vl_op_idx" "3")])









+









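+;; Strided load/store: vlse.v/vsse.v with an explicit stride, falling back
+;; to vle.v/vse.v in the unit-stride alternatives.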
+(define_insn "@pred_th_strided_load<mode>"









+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")









+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")









+      (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V









+     [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")









+      (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)









+   (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]









+  "TARGET_XTHEADVECTOR"









+  "@









+  vlse.v\t%0,%3,%z4%p1









+  vlse.v\t%0,%3,%z4









+  vlse.v\t%0,%3,%z4,%1.t









+  vle.v\t%0,%3%p1









+  vle.v\t%0,%3









+  vle.v\t%0,%3,%1.t"









+  [(set_attr "type" "vlds")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_strided_store<mode>"









+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK,       rK")









+      (match_operand 5 "const_int_operand"        "    i,        i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V









+     [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")









+      (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)









+   (match_dup 0)))]









+  "TARGET_XTHEADVECTOR"









+  "@









+  vsse.v\t%3,%0,%z2%p1









+  vse.v\t%3,%0%p1"









+  [(set_attr "type" "vsts")









+   (set_attr "mode" "<MODE>")









+   (set (attr "avl_type_idx") (const_int 5))])









+









+









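+;; Indexed loads.  XTheadVector uses the single mnemonic vlxe.v here for
+;; both the ordered and unordered variants and for all index EEW ratios.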
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"









+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")









+      (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")









+      (match_operand 6 "const_int_operand"         "  i,  i, i,  i")









+      (match_operand 7 "const_int_operand"         "  i,  i, i,  i")









+      (match_operand 8 "const_int_operand"         "  i,  i, i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V









+     [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)









+   (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+;; DEST eew is greater than SOURCE eew.









+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"









+  [(set (match_operand:VEEWEXT2 0 "register_operand"                    "=&vr,  &vr")









+ (if_then_else:VEEWEXT2









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "    i,    i")









+      (match_operand 7 "const_int_operand"                      "    i,    i")









+      (match_operand 8 "const_int_operand"                      "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWEXT2









+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)









+   (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"









+  [(set (match_operand:VEEWEXT4 0 "register_operand"                    "=&vr,  &vr")









+ (if_then_else:VEEWEXT4









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "    i,    i")









+      (match_operand 7 "const_int_operand"                      "    i,    i")









+      (match_operand 8 "const_int_operand"                      "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWEXT4









+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)









+   (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"









+  [(set (match_operand:VEEWEXT8 0 "register_operand"                    "=&vr,  &vr")









+ (if_then_else:VEEWEXT8









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "    i,    i")









+      (match_operand 7 "const_int_operand"                      "    i,    i")









+      (match_operand 8 "const_int_operand"                      "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWEXT8









+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)









+   (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+;; DEST eew is smaller than SOURCE eew.









+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"









+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")









+ (if_then_else:VEEWTRUNC2









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWTRUNC2









+     [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)









+   (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"









+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")









+ (if_then_else:VEEWTRUNC4









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWTRUNC4









+     [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)









+   (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"









+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")









+ (if_then_else:VEEWTRUNC8









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VEEWTRUNC8









+     [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)









+   (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxe.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vld<order>x")









+   (set_attr "mode" "<MODE>")])









+









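+;; Indexed stores, one pattern per SEW/LMUL ratio; <th_order> selects the
+;; ordered (vsxe.v) or unordered (vsuxe.v) form.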
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")









+    (match_operand:RATIO64I 2 "register_operand" "  vr")









+    (match_operand:RATIO64 3 "register_operand"  "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO64:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")









+    (match_operand:RATIO32I 2 "register_operand" "  vr")









+    (match_operand:RATIO32 3 "register_operand"  "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO32:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")









+    (match_operand:RATIO16I 2 "register_operand" "  vr")









+    (match_operand:RATIO16 3 "register_operand"  "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO16:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")









+    (match_operand:RATIO8I 2 "register_operand" "  vr")









+    (match_operand:RATIO8 3 "register_operand"  "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO8:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")









+    (match_operand:RATIO4I 2 "register_operand" "  vr")









+    (match_operand:RATIO4 3 "register_operand"  "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO4:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")









+    (match_operand:RATIO2I 2 "register_operand"  "  vr")









+    (match_operand:RATIO2 3 "register_operand"   "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO2:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")









+    (match_operand:RATIO1 2 "register_operand"   "  vr")









+    (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vstux")









+   (set_attr "mode" "<RATIO1:MODE>")])









+









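+;; Mask population count and find-first-set: vmpopc.m/vmfirst.m, with the
+;; source mask ANDed into the predicate mask.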
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"









+  [(set (match_operand:P 0 "register_operand"               "=r")









+ (popcount:P









+   (unspec:VB









+     [(and:VB









+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")









+        (match_operand:VB 2 "register_operand"    "   vr"))









+      (match_operand 3 "vector_length_operand"    "   rK")









+      (match_operand 4 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]









+  "TARGET_XTHEADVECTOR"









+  "vmpopc.m\t%0,%2%p1"









+  [(set_attr "type" "vmpop")









+   (set_attr "mode" "<VB:MODE>")])









+









+(define_insn "@pred_th_ffs<VB:mode><P:mode>"









+  [(set (match_operand:P 0 "register_operand"                 "=r")









+ (plus:P









+   (ffs:P









+     (unspec:VB









+       [(and:VB









+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")









+          (match_operand:VB 2 "register_operand"    "   vr"))









+        (match_operand 3 "vector_length_operand"    "   rK")









+        (match_operand 4 "const_int_operand"        "    i")









+        (reg:SI VL_REGNUM)









+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))









+   (const_int -1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmfirst.m\t%0,%2%p1"









+  [(set_attr "type" "vmffs")









+   (set_attr "mode" "<VB:MODE>")])









+









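+;; Narrowing float<->integer conversions under the dynamic rounding mode
+;; (FRM_REGNUM): vfncvt.x<v_su>.f.v and vfncvt.f.x<u>.v.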
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"









+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<VNCONVERT>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:<VNCONVERT>









+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)









+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtftoi")









+   (set_attr "mode" "<VNCONVERT>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+









+(define_insn "@pred_th_narrow_<float_cvt><mode>"









+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<VNCONVERT>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float:<VNCONVERT>









+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.f.x<u>.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtitof")









+   (set_attr "mode" "<VNCONVERT>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+









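+;; Narrowing right shifts (vnsra/vnsrl) with vector, scalar and immediate
+;; shift amounts; plain truncation is a vnsrl.vx by x0, and the float
+;; double-to-single truncation is vfncvt.f.f.v.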
+(define_insn "@pred_th_narrow_<optab><mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (any_shiftrt:VWEXTI









+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")









+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vn<insn>.v%o4\t%0,%3,%v4%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+









+(define_insn "@pred_th_narrow_<optab><mode>_scalar"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (any_shiftrt:VWEXTI









+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")









+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vn<insn>.v%o4\t%0,%3,%4%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])









+









+(define_insn "@pred_th_trunc<mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (truncate:<V_DOUBLE_TRUNC>









+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vnsrl.vx\t%0,%3,x0%p1"









+  [(set_attr "type" "vnshift")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_trunc<mode>"









+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")









+ (if_then_else:<V_DOUBLE_TRUNC>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")









+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")









+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)









+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)









+   (float_truncate:<V_DOUBLE_TRUNC>









+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))









+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR"









+  "vfncvt.f.f.v\t%0,%3%p1"









+  [(set_attr "type" "vfncvtftof")









+   (set_attr "mode" "<V_DOUBLE_TRUNC>")









+   (set (attr "frm_mode")









+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])









+









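+;; Fault-only-first load: vleff.v also writes VL, modeled by the second
+;; set of VL_REGNUM wrapped in UNSPEC_MODIFY_VL.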
+(define_insn "@pred_th_fault_load<mode>"









+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")









+ (if_then_else:V









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V









+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)









+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))









+   (set (reg:SI VL_REGNUM)









+   (unspec:SI









+     [(if_then_else:V









+        (unspec:<VM>









+ [(match_dup 1) (match_dup 4) (match_dup 5)









+ (match_dup 6) (match_dup 7)









+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)









+        (match_dup 2))] UNSPEC_MODIFY_VL))]









+  "TARGET_XTHEADVECTOR"









+  "vleff.v\t%0,%3%p1"









+  [(set_attr "type" "vldff")









+   (set_attr "mode" "<MODE>")])









+









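+;; Segment (tuple mode) load/store: vlseg<nf>e.v/vsseg<nf>e.v plus the
+;; strided and fault-only-first variants.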
+(define_insn "@pred_th_unit_strided_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]









+  "TARGET_XTHEADVECTOR"









+  "vlseg<nf>e.v\t%0,(%z3)%p1"









+  [(set_attr "type" "vlsegde")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_unit_strided_store<mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+       (match_operand 3 "vector_length_operand"    "   rK")









+       (match_operand 4 "const_int_operand"        "    i")









+       (reg:SI VL_REGNUM)









+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand:VT 2 "register_operand"         "   vr")









+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]









+  "TARGET_XTHEADVECTOR"









+  "vsseg<nf>e.v\t%2,(%z1)%p0"









+  [(set_attr "type" "vssegte")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_strided_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (match_operand 8 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_STRIDED)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]









+  "TARGET_XTHEADVECTOR"









+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"









+  [(set_attr "type" "vlsegds")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_strided_store<mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+       (match_operand 4 "vector_length_operand"    "   rK")









+       (match_operand 5 "const_int_operand"        "    i")









+       (reg:SI VL_REGNUM)









+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")









+    (match_operand:VT 3 "register_operand"         "   vr")









+    (mem:BLK (scratch))] UNSPEC_STRIDED))]









+  "TARGET_XTHEADVECTOR"









+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"









+  [(set_attr "type" "vssegts")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "@pred_th_fault_load<mode>"









+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")









+ (if_then_else:VT









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")









+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")









+      (match_operand 5 "const_int_operand"        "    i,     i,     i")









+      (match_operand 6 "const_int_operand"        "    i,     i,     i")









+      (match_operand 7 "const_int_operand"        "    i,     i,     i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:VT









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")









+      (mem:BLK (scratch))] UNSPEC_VLEFF)









+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))









+   (set (reg:SI VL_REGNUM)









+        (unspec:SI









+          [(if_then_else:VT









+      (unspec:<VM>









+        [(match_dup 1) (match_dup 4) (match_dup 5)









+         (match_dup 6) (match_dup 7)









+         (reg:SI VL_REGNUM)









+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+      (unspec:VT









+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)









+      (match_dup 2))] UNSPEC_MODIFY_VL))]









+  "TARGET_XTHEADVECTOR"









+  "vlseg<nf>eff.v\t%0,(%z3)%p1"









+  [(set_attr "type" "vlsegdff")









+   (set_attr "mode" "<MODE>")])









+









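+;; Indexed segment loads, one pattern per tuple type paired with the index
+;; mode of the matching SEW/LMUL ratio.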
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"









+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V1T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V1T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V1T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"









+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V2T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V2T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V2T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"









+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V4T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V4T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V4T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"









+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")









+ (if_then_else:V8T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V8T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)









+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V8T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"









+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")









+ (if_then_else:V16T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V16T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)









+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V16T:MODE>")])









+









+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"









+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")









+ (if_then_else:V32T









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")









+      (match_operand 5 "vector_length_operand"    "   rK,   rK")









+      (match_operand 6 "const_int_operand"        "    i,    i")









+      (match_operand 7 "const_int_operand"        "    i,    i")









+      (match_operand 8 "const_int_operand"        "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (unspec:V32T









+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")









+      (mem:BLK (scratch))









+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)









+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]









+  "TARGET_XTHEADVECTOR"









+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"









+  [(set_attr "type" "vlsegd<order>x")









+   (set_attr "mode" "<V32T:MODE>")])









+









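+;; Indexed segment stores, mirroring the loads above.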
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO64I 2 "register_operand"       "   vr")









+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V1T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO32I 2 "register_operand"       "   vr")









+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V2T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO16I 2 "register_operand"       "   vr")









+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V4T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO8I 2 "register_operand"       "   vr")









+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V8T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO4I 2 "register_operand"      "   vr")









+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V16T:MODE>")])









+









+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"









+  [(set (mem:BLK (scratch))









+ (unspec:BLK









+   [(unspec:<VM>









+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")









+      (match_operand 4 "vector_length_operand"    "   rK")









+      (match_operand 5 "const_int_operand"        "    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")









+    (match_operand:RATIO2I 2 "register_operand"      "   vr")









+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]









+  "TARGET_XTHEADVECTOR"









+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";









+  [(set_attr "type" "vssegtux")









+   (set_attr "mode" "<V32T:MODE>")])









+









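+;; Unary ops synthesized from existing instructions: vfneg/vfabs via
+;; vfsgnjn.vv/vfsgnjx.vv with a repeated source operand, bitwise not via
+;; vnot.v, and integer negate via vrsub.vx with x0.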
+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")









+ (if_then_else:V_VLSF









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float_unop_neg:V_VLSF









+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))









+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vfsgnjn.vv\t%0,%3,%3%p1"









+  [(set_attr "type" "<float_insn_type>")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")









+ (if_then_else:V_VLSF









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")









+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")









+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (any_float_unop_abs:V_VLSF









+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))









+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vfsgnjx.vv\t%0,%3,%3%p1"









+  [(set_attr "type" "<float_insn_type>")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")









+ (if_then_else:V_VLSI









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")









+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")









+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")









+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (not_unop:V_VLSI









+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))









+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vnot.v\t%0,%3%p1"









+  [(set_attr "type" "vialu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"









+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")









+ (if_then_else:V_VLSI









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")









+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")









+      (match_operand 5 "const_int_operand" " i, i,  i,  i")









+      (match_operand 6 "const_int_operand" " i, i,  i,  i")









+      (match_operand 7 "const_int_operand" " i, i,  i,  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (neg_unop:V_VLSI









+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))









+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]









+  "TARGET_XTHEADVECTOR"









+  "vrsub.vx\t%0,%3,x0%p1"









+  [(set_attr "type" "vialu")









+   (set_attr "mode" "<MODE>")









+   (set_attr "vl_op_idx" "4")









+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
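+;; Note on the pattern above: relative to the integer unops, it carries one
+;; extra const_int (operand 8) recording the requested rounding mode.  The
+;; operand only feeds the "frm_mode" attribute so the FRM-tracking passes
+;; can see it; the actual dependence is expressed by listing FRM_REGNUM
+;; among the unspec inputs.
+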
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<V_DOUBLE_TRUNC>
+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")
+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
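+;; Note on the pattern above: vnclip{,u} narrows a 2*SEW source to SEW by a
+;; right shift, rounding according to VXRM (hence the VXRM_REGNUM input)
+;; and saturating to the narrow range.  Roughly, per active element:
+;;   vd[i] = saturate<SEW> (round_vxrm (vs2[i] >> shamt[i]));
+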
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")
+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+   (unspec:<V_DOUBLE_TRUNC>
+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")
+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)
+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR"
+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vnclip")
+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")
+ (unspec:<V_LMUL1>
+   [(unspec:<VM>
+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")
+      (match_operand               5 "vector_length_operand" "   rK,   rK")
+      (match_operand               6 "const_int_operand"     "    i,    i")
+      (match_operand               7 "const_int_operand"     "    i,    i")
+      (match_operand               8 "const_int_operand"     "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_LMUL1> [
+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")
+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")
+           ] ANY_FREDUC_SUM)
+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
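+;; Note on the pattern above: the scalar accumulator lives in element 0 of
+;; an LMUL=1 register, roughly vd[0] = vs1[0] + sum (active vs2[i]).  The
+;; [ou] in vfred[ou]sum selects ordered (sequential) versus unordered FP
+;; summation, which matters for rounding reproducibility.
+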
+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")
+ (unspec:<V_EXT_LMUL1>
+   [(unspec:<VM>
+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")
+      (match_operand                5 "vector_length_operand" "   rK,   rK")
+      (match_operand                6 "const_int_operand"     "    i,    i")
+      (match_operand                7 "const_int_operand"     "    i,    i")
+      (match_operand                8 "const_int_operand"     "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+           (unspec:<V_EXT_LMUL1> [
+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")
+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")
+           ] ANY_FWREDUC_SUM)
+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]
+  "TARGET_XTHEADVECTOR"
+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+  [(set_attr "type" "vfwred<order>")
+   (set_attr "mode" "<MODE>")
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
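+;; Note on the pattern above: the widening form accumulates SEW-wide
+;; elements into a 2*SEW element-0 accumulator, so the result uses
+;; <V_EXT_LMUL1> and is early-clobbered ("&vr") to keep it from
+;; overlapping the narrower source operands.
+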
+(define_insn "@pred_th_madc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+ (unspec:<VM>
+    [(plus:VI
+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")
+        (match_operand 5 "const_int_operand"     "   i,   i,   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2m\t%0,%1,%v2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
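+;; Note on the pattern above: vmadc.v{v,x,i}m produces the carry-out mask
+;; of a full add,
+;;   vd.mask[i] = carry_out (vs2[i] + src2[i] + carry.mask[i]);
+;; the companion vadc instruction produces the sum itself.
+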
+(define_insn "@pred_th_msbc<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")
+ (unspec:<VM>
+    [(minus:VI
+      (match_operand:VI 1 "register_operand"     "  vr")
+      (match_operand:VI 2 "register_operand"     " vr"))
+     (match_operand:<VM> 3 "register_operand"    " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand" " rK")
+        (match_operand 5 "const_int_operand"     "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vvm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
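+;; Note on the pattern above: the subtract analogue, yielding the
+;; borrow-out mask,
+;;   vd.mask[i] = borrow_out (vs2[i] - vs1[i] - borrow.mask[i]);
+;; the companion vsbc instruction produces the difference.
+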
+(define_insn "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "register_operand" "  r"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (match_operand:<VM> 3 "register_operand")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand")
+        (match_operand 5 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
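+;; Note on the expander above: it goes through
+;; riscv_vector::sew64_scalar_helper because a 64-bit scalar operand cannot
+;; always live in a single GPR (e.g. on rv32); the helper broadcasts the
+;; scalar when needed and then calls the lambda, which emits the
+;; already-defined @pred_th_madc<mode> pattern on the broadcast value.
+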
+(define_insn "*pred_th_madc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (match_operand:<VM> 3 "register_operand"          " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"       " rK")
+        (match_operand 5 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (match_operand:<VM> 3 "register_operand")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand")
+        (match_operand 5 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4], operands[5]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (match_operand:<VM> 3 "register_operand"     " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"  " rK")
+        (match_operand 5 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (match_operand:<VM> 3 "register_operand"          " vm")
+     (unspec:<VM>
+       [(match_operand 4 "vector_length_operand"       " rK")
+        (match_operand 5 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vxm\t%0,%1,%z2,%3"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")
+ (unspec:<VM>
+    [(plus:VI
+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")
+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")
+        (match_operand 4 "const_int_operand"     "   i,   i,   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.v%o2\t%0,%1,%v2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI
+      (match_operand:VI 1 "register_operand"     "   vr")
+      (match_operand:VI 2 "register_operand"     "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand" "  rK")
+        (match_operand 4 "const_int_operand"     "   i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vv\t%0,%1,%2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_QHS
+      (vec_duplicate:VI_QHS
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(plus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"       " rK")
+        (match_operand 4 "const_int_operand"           "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmadc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_int_operand"))
+      (match_operand:VI_D 1 "register_operand"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand")
+        (match_operand 4 "const_int_operand")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+{
+  if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+        broadcast_scalar, operands[3], operands[4]));
+        },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+    DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+      (match_operand:VI_D 1 "register_operand"    "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"  " rK")
+        (match_operand 4 "const_int_operand"      "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")
+ (unspec:<VM>
+    [(minus:VI_D
+      (vec_duplicate:VI_D
+        (sign_extend:<VEL>
+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+      (match_operand:VI_D 1 "register_operand"         "  vr"))
+     (unspec:<VM>
+       [(match_operand 3 "vector_length_operand"      " rK")
+        (match_operand 4 "const_int_operand"          "  i")
+        (reg:SI VL_REGNUM)
+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+  "TARGET_XTHEADVECTOR"
+  "vmsbc.vx\t%0,%1,%z2"
+  [(set_attr "type" "vicalu")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "3")
+   (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)
+     (match_dup 4)
+     (match_dup 5)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
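+;; Note on the pattern above: the template "vsetvli\t%0,%1,e%2,%m3" encodes
+;; only SEW and LMUL; vector 0.7.1 (XTheadVector) has no tail/mask policy
+;; bits in vtype, so operands 4 and 5 survive only as the "ta"/"ma" insn
+;; attributes.  A typical emitted form is e.g. "vsetvli t0,a0,e32,m2".
+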
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_operand 3 "const_int_operand" "i")
+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to gain the benefits of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")
+    (match_operand 4 "const_int_operand" "i")
+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+       (match_dup 5)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
+
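+;; Note on the pattern above: it stays as "#" (no assembler output) until
+;; the split condition "&& epilogue_completed" holds, and is then rewritten
+;; into the parallel that makes the VL/VTYPE sets explicit; keeping the
+;; side-effect-free form until late lets the optimizers move and CSE it
+;; freely.
+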
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
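+;; Note on the pattern above: in the "merge tie mask" form the mask, the
+;; merge value and the destination are all the same register (operand 1 is
+;; constrained to "0" and (match_dup 1) is the else-arm), so inactive
+;; elements keep their old mask bits and the compare is emitted with the
+;; ",v0.t" masking suffix.
+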
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
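+;; Note on the _narrow variant above: the integer sources may span a
+;; register group (LMUL > 1) while the mask result fits in a single
+;; register, so "&vr" keeps the destination out of the source group; the
+;; "=vm" alternative, whose mask input is tied to "0", is the one case
+;; that needs no early clobber.
+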
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"        "   0")
+      (match_operand 5 "vector_length_operand"        "  rK")
+      (match_operand 6 "const_int_operand"            "   i")
+      (match_operand 7 "const_int_operand"            "   i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "ltge_operator"
+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")
+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_ltge<mode>"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "ltge_operator"
+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")
+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.v%o5\t%0,%4,%v5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 4 "register_operand"      "  r"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 4 "register_operand"       "  r"))
+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_QHS
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 4 "register_operand"       "  r"))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"           "  0")
+      (match_operand 5 "vector_length_operand"           " rK")
+      (match_operand 6 "const_int_operand"               "  i")
+      (match_operand 7 "const_int_operand"               "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 4 "register_operand"       "  r"))
+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])
+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_eqne<mode>_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))
+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))
+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])
+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
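+;; The _extended_scalar variants below match the comparison value as a
+;; narrower GPR sign-extended to the 64-bit element type (<VSUBEL> to
+;; <VEL>), which is presumably how a DImode element compare against a
+;; scalar appears when the scalar register is only 32 bits wide (rv32).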
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"          "  0")
+      (match_operand 5 "vector_length_operand"          " rK")
+      (match_operand 6 "const_int_operand"              "  i")
+      (match_operand 7 "const_int_operand"              "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")
+   (set_attr "merge_op_idx" "1")
+   (set_attr "vl_op_idx" "5")
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))])
+
+;; We don't use early-clobber for LMUL <= 1 to get better codegen.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")
+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")
+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 3 "comparison_except_eqge_operator"
+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")
+       (vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])
+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]
+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+  "vms%B3.vx\t%0,%4,%5%p1"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")
+ (if_then_else:<VM>
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "register_operand"            "  0")
+      (match_operand 5 "vector_length_operand"            " rK")
+      (match_operand 6 "const_int_operand"                "  i")
+      (match_operand 7 "const_int_operand"                "  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+   (match_operator:<VM> 2 "equality_operator"
+      [(vec_duplicate:V_VLSI_D
+         (sign_extend:<VEL>
+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))
+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])
+   (match_dup 1)))]
+  "TARGET_XTHEADVECTOR"
+  "vms%B2.vx\t%0,%3,%4,v0.t"
+  [(set_attr "type" "vicmp")
+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_extended_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))









+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSI_D









+         (sign_extend:<VEL>









+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))









+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vms%B3.vx\t%0,%4,%5%p1"









+  [(set_attr "type" "vicmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")









+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vv\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"          "  0")









+      (match_operand 5 "vector_length_operand"          " rK")









+      (match_operand 6 "const_int_operand"              "  i")









+      (match_operand 7 "const_int_operand"              "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "signed_order_operator"









+      [(match_operand:V_VLSF 3 "register_operand"           " vr")









+       (match_operand:V_VLSF 4 "register_operand"           " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vv\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")









+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vv\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"         "  0")









+      (match_operand 5 "vector_length_operand"         " rK")









+      (match_operand 6 "const_int_operand"             "  i")









+      (match_operand 7 "const_int_operand"             "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "signed_order_operator"









+      [(match_operand:V_VLSF 3 "register_operand"      " vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 4 "register_operand"     "  f"))])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vf\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_cmp<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_cmp<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "signed_order_operator"









+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")









+       (vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"









+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "register_operand"         "  0")









+      (match_operand 5 "vector_length_operand"         " rK")









+      (match_operand 6 "const_int_operand"             "  i")









+      (match_operand 7 "const_int_operand"             "  i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 2 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 4 "register_operand"     "  f"))









+       (match_operand:V_VLSF 3 "register_operand"      " vr")])









+   (match_dup 1)))]









+  "TARGET_XTHEADVECTOR"









+  "vmf%B2.vf\t%0,%3,%4,v0.t"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")









+   (set_attr "merge_op_idx" "1")









+   (set_attr "vl_op_idx" "5")









+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))









+   (set (attr "avl_type_idx") (const_int 7))])









+









+;; We don't use early-clobber for LMUL <= 1 to get better codegen.









+(define_insn "*pred_th_eqne<mode>_scalar"









+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))









+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









+









+;; We use early-clobber for source LMUL > dest LMUL.









+(define_insn "*pred_th_eqne<mode>_scalar_narrow"









+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")









+ (if_then_else:<VM>









+   (unspec:<VM>









+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")









+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")









+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")









+      (reg:SI VL_REGNUM)









+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)









+   (match_operator:<VM> 3 "equality_operator"









+      [(vec_duplicate:V_VLSF









+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))









+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])









+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]









+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"









+  "vmf%B3.vf\t%0,%4,%5%p1"









+  [(set_attr "type" "vfcmp")









+   (set_attr "mode" "<MODE>")])









\ No newline at end of file









diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md









index 5f5f7b5b986..c0fc7a2441d 100644









--- a/gcc/config/riscv/vector-iterators.md









+++ b/gcc/config/riscv/vector-iterators.md









@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [









])









(define_mode_iterator VI [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -128,11 +128,11 @@ (define_mode_iterator VI [









;; allow the instruction and mode to be matched during combine et al.









(define_mode_iterator VF [









  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")









-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")









-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









@@ -140,11 +140,11 @@ (define_mode_iterator VF [









(define_mode_iterator VF_ZVFHMIN [









  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [









])









(define_mode_iterator VEEWEXT2 [









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [









])









(define_mode_iterator VEEWEXT4 [









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [









])









(define_mode_iterator VEEWTRUNC2 [









-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









  (RVVM4SI "TARGET_64BIT")









  (RVVM2SI "TARGET_64BIT")









  (RVVM1SI "TARGET_64BIT")









-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")









+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")









  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")









  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









])









(define_mode_iterator VEEWTRUNC4 [









-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM2HI "TARGET_64BIT")









  (RVVM1HI "TARGET_64BIT")









-  (RVVMF2HI "TARGET_64BIT")









-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")









+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")









+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









  (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")









  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")









-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









])









(define_mode_iterator VEEWTRUNC8 [









  (RVVM1QI "TARGET_64BIT")









-  (RVVMF2QI "TARGET_64BIT")









-  (RVVMF4QI "TARGET_64BIT")









-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")









+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")









+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")









+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")









])









(define_mode_iterator VEI16 [









-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [









])









(define_mode_iterator VFULLI [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")









@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [









])









(define_mode_iterator VI_QH [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









])









(define_mode_iterator VI_QHS [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")









  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")









@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [









])









(define_mode_iterator VI_QHS_NO_M8 [









-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")









  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")









@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [









(define_mode_iterator VF_HS [









  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")









-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")









-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")









  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")









@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [









  (RVVM4HF "TARGET_ZVFH")









  (RVVM2HF "TARGET_ZVFH")









  (RVVM1HF "TARGET_ZVFH")









-  (RVVMF2HF "TARGET_ZVFH")









-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")









  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")









@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [









])









(define_mode_iterator V_VLSI_QHS [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")









  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")









@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [









;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or









;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.









(define_mode_iterator RATIO64 [









-  (RVVMF8QI "TARGET_MIN_VLEN > 32")









-  (RVVMF4HI "TARGET_MIN_VLEN > 32")









-  (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM1DI "TARGET_VECTOR_ELEN_64")









-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









])









(define_mode_iterator RATIO32 [









-  RVVMF4QI









-  RVVMF2HI









+  (RVVMF4QI "!TARGET_XTHEADVECTOR")









+  (RVVMF2HI "!TARGET_XTHEADVECTOR")









  RVVM1SI









  (RVVM2DI "TARGET_VECTOR_ELEN_64")









-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")









+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")









  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")









])









(define_mode_iterator RATIO16 [









-  RVVMF2QI









+  (RVVMF2QI "!TARGET_XTHEADVECTOR")









  RVVM1HI









  RVVM2SI









  (RVVM4DI "TARGET_VECTOR_ELEN_64")









@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [









])









(define_mode_iterator RATIO64I [









-  (RVVMF8QI "TARGET_MIN_VLEN > 32")









-  (RVVMF4HI "TARGET_MIN_VLEN > 32")









-  (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









])









(define_mode_iterator RATIO32I [









-  RVVMF4QI









-  RVVMF2HI









+  (RVVMF4QI "!TARGET_XTHEADVECTOR")









+  (RVVMF2HI "!TARGET_XTHEADVECTOR")









  RVVM1SI









  (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









])









(define_mode_iterator RATIO16I [









-  RVVMF2QI









+  (RVVMF2QI "!TARGET_XTHEADVECTOR")









  RVVM1HI









  RVVM2SI









  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [









])









(define_mode_iterator V_FRACT [









-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")









-  (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









])









(define_mode_iterator VWEXTI [









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [









  (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")









  (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")









  (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [









  (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")









  (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")









  (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [









(define_mode_iterator VWCONVERTI [









  (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")









-  (RVVMF2SI "TARGET_ZVFH")









+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")









  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")









@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [









])









(define_mode_iterator VQEXTI [









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")









  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")









@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [









;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].









(define_mode_iterator VINDEXED [









-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")









+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")









+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")









+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")









  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [









  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")









  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")









-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")









-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")









  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")









@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [









(define_mode_iterator V_VLS_F_CONVERT_SI [









  (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")









-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [









])









(define_mode_iterator V_VLS_F_CONVERT_DI [









-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")









-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")









+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")









+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")









  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")









  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")









-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")









  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")









  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")









diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md









index 036b2425f32..9941651341d 100644









--- a/gcc/config/riscv/vector.md









+++ b/gcc/config/riscv/vector.md









@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"









;; check. However, we need default value of SEW for vsetvl instruction since there









;; is no field for ratio in the vsetvl instruction encoding.









(define_attr "sew" ""









-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\









+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\









  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\









  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\









  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\









@@ -95,6 +95,18 @@ (define_attr "sew" ""









  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\









  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")









(const_int 8)









+ (eq_attr "mode" "RVVMF16BI")









+    (if_then_else (match_test "TARGET_XTHEADVECTOR")









+      (const_int 16)









+      (const_int 8))









+ (eq_attr "mode" "RVVMF32BI")









+    (if_then_else (match_test "TARGET_XTHEADVECTOR")









+      (const_int 32)









+      (const_int 8))









+ (eq_attr "mode" "RVVMF64BI")









+    (if_then_else (match_test "TARGET_XTHEADVECTOR")









+      (const_int 64)









+      (const_int 8))









(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\









  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\









  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\









@@ -155,9 +167,9 @@ (define_attr "vlmul" ""









(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")









(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")









(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")









- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")









- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")









- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")









+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")









+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")









+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")









(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")









(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")









(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")









@@ -428,6 +440,10 @@ (define_attr "ratio" ""









  vislide1up,vislide1down,vfslide1up,vfslide1down,\









  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")









  (const_int INVALID_ATTRIBUTE)









+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\









+        vlsegdff,vssegtux,vlsegdox,vlsegdux")









+       (match_test "TARGET_XTHEADVECTOR"))









+    (const_int INVALID_ATTRIBUTE)









(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)









(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)









(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)









@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""









(symbol_ref "riscv_vector::FRM_DYN")]









(symbol_ref "riscv_vector::FRM_NONE")))









+(include "thead-vector.md")









+









;; -----------------------------------------------------------------









;; ---- Miscellaneous Operations









;; -----------------------------------------------------------------









@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"









(define_insn "*mov<mode>_whole"









  [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")









(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  "@









    vl%m1re<sew>.v\t%0,%1









    vs%m1r.v\t%1,%0









@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"









(define_insn "*mov<mode>"









  [(set (match_operand:VB 0 "register_operand" "=vr")









(match_operand:VB 1 "register_operand" " vr"))]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  "vmv1r.v\t%0,%1"









  [(set_attr "type" "vmov")









    (set_attr "mode" "<MODE>")])









@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"









  (any_extend:VWEXTI









    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))









  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  "v<sz>ext.vf2\t%0,%3%p1"









  [(set_attr "type" "vext")









    (set_attr "mode" "<MODE>")









@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"









  (any_extend:VQEXTI









    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))









  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  "v<sz>ext.vf4\t%0,%3%p1"









  [(set_attr "type" "vext")









    (set_attr "mode" "<MODE>")









@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"









  (any_extend:VOEXTI









    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))









  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]









-  "TARGET_VECTOR"









+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"









  "v<sz>ext.vf8\t%0,%3%p1"









  [(set_attr "type" "vext")









    (set_attr "mode" "<MODE>")









diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c









index 2e0e12aa045..2eef9e1e1a8 100644









--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c









+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c









@@ -1,4 +1,4 @@









-/* { dg-do compile } */









+/* { dg-do compile { target { ! riscv_xtheadvector } } } */









/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */









void foo0 () {__rvv_bool64_t t;}









diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c









index 3d81b179235..ef329e30785 100644









--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c









+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c









@@ -1,4 +1,4 @@









/* { dg-do compile } */









/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */









-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */









+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */









diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp









index 7f13ff0ca56..70df6b1401c 100644









--- a/gcc/testsuite/lib/target-supports.exp









+++ b/gcc/testsuite/lib/target-supports.exp









@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {









    }]









}









+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.









+# Cache the result.









+









+proc check_effective_target_riscv_xtheadvector { } {









+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {









+       #ifndef __riscv_xtheadvector









+       #error "Not __riscv_xtheadvector"









+       #endif









+    }]









+}









+









+









# Return 1 if we can execute code when using dg-add-options riscv_v









proc check_effective_target_riscv_v_ok { } {









--









2.17.1
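
A usage sketch for the riscv_xtheadvector effective target added to target-supports.exp above (a hypothetical test skeleton, not part of this patch; directive placement follows the usual testsuite conventions and the loop body is illustrative only):

/* { dg-do compile { target { riscv_xtheadvector } } } */
/* { dg-options "-O2" } */

void
add_arrays (int *a, int *b, int n)
{
  for (int i = 0; i < n; i++)
    a[i] += b[i];
}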































^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 14:55             ` 钟居哲
@ 2023-12-20 15:21               ` joshua
  2023-12-20 15:29                 ` Re: [PATCH " 钟居哲
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-20 15:21 UTC (permalink / raw)
  To: 钟居哲, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu

[-- Attachment #1: Type: text/plain, Size: 238280 bytes --]

Hi Juzhe,
All the patterns that I "copied" from the current vector.md are necessary. The differences go beyond the "th" prefix: they are genuinely different patterns, since they generate entirely different instructions apart from the "th_" string.
We have already tried our best to eliminate extra patterns in thead-vector.md. You can refer to the difference list in our spec and check whether these patterns are redundant.
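One difference that cannot be expressed by renaming alone is the fractional-LMUL support dropped by the vector-iterators.md changes in this patch, which guard the RVVMF* modes with !TARGET_XTHEADVECTOR. A minimal sketch of the user-visible effect (the exact diagnostic is an assumption, not taken from a build):

#include <riscv_vector.h>

void
fractional_types (void)
{
  vint8m1_t whole;  /* LMUL = 1: the iterators keep RVVM1QI for both
                       RVV and XTheadVector.  */
  vint8mf2_t fract; /* LMUL = 1/2: RVVMF2QI is guarded with
                       !TARGET_XTHEADVECTOR above, so this type is
                       expected to be rejected under XTheadVector.  */
}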
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, December 20, 2023, 22:55
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
My first impression is that you are just copying the current vector.md with no pattern changes,
simply adding a "th_" string to each pattern name.
It looks odd to me.
Take LLVM for example: even though the build time of the LLVM match table and TableGen is not an issue for now,
they still try hard to minimize the match table and optimize the TableGen output.
To me this patch just doubles the patterns, and potentially explodes the number of RISC-V patterns.
I think we should optimize the thead vector patterns and eliminate the redundant, unnecessary patterns to avoid affecting
the build of the GCC toolchain.
juzhe.zhong@rivai.ai
From: joshua
Sent: 2023-12-20 22:41
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
Yes, XTheadVector does not have vfneg.v as a pseudo instruction for vfsgnjn.vv.
We have listed all the differences between vector and xtheadvector in our spec. You may refer to it.
https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc
https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd
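As a worked illustration of the vfneg.v point (a sketch only; the XTheadVector behaviour in the comment is inferred from the statement above rather than from a build):

#include <stddef.h>
#include <riscv_vector.h>

vfloat32m1_t
neg (vfloat32m1_t a, size_t vl)
{
  /* RVV 1.0 treats vfneg.v vd,vs as an assembler pseudo for
     vfsgnjn.vv vd,vs,vs; since XTheadVector lacks that pseudo,
     only the explicit vfsgnjn.vv form can be emitted there.  */
  return __riscv_vfneg_v_f32m1 (a, vl);
}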
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: Wednesday, December 20, 2023, 22:27
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Why do you add this?
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vf<insn>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
Doesn't XTheadVector have th.vfneg.v?
juzhe.zhong@rivai.ai
From: joshua
Date: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
The patterns you suppose are redundant are all necessary, because they generate different instructions from vector.
Take pred_th_unit_strided_store as an example: xtheadvector does not have <sew> in its load/store instructions,
so we cannot reuse the same pattern as vector. That is why we define a new function_base in thead-vector-builtins-functions.def.
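A sketch of the mnemonic difference (illustrative; the tuple intrinsic name follows the current RVV intrinsics, and the XTheadVector form matches the pattern quoted below):

#include <riscv_vector.h>

void store_pairs (int32_t *p, vint32m1x2_t v, size_t vl)
{
  /* Two-field segment store of 32-bit elements:
       RVV 1.0:       vsseg2e32.v v8,(a0)   -- SEW in the mnemonic
       XTheadVector:  vsseg2e.v   v8,(a0)   -- no SEW (the th. prefix
                                               is added separately)   */
  __riscv_vsseg2e32_v_i32m1x2 (p, v, vl);
}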
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: 2023-12-20 (Wednesday) 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
Why do you add these?
+(define_insn "@pred_th_unit_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 2 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vsseg<nf>e.v\t%2,(%z1)%p0"
+ [(set_attr "type" "vssegte")
+ (set_attr "mode" "<MODE>")])
These patterns are redundant; only the names are different.
They should be removed.
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch handles the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix
to all XTheadVector instructions is not included here.
For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md, in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
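One concrete example of such a difference (a sketch, not part of the patch; the assembly shown is indicative only): XTheadVector predates the vsetivli form introduced in RVV 1.0, which is why the predicate changes below only accept an immediate AVL of zero:

#include <riscv_vector.h>

size_t four_elems (void)
{
  /* RVV 1.0:       vsetivli a0,4,e32,m1,ta,ma   -- immediate AVL
     XTheadVector:  li       a5,4
                    th.vsetvli a0,a5,e32,m1      -- no vsetivli      */
  return __riscv_vsetvl_e32m1 (4);
}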
gcc/ChangeLog:
* config.gcc: Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New function.
(build_one): Call check_type before adding functions.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewise.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/t-riscv: Add new files.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector-builtins-functions.def: New file.
* config/riscv/thead-vector-builtins.cc: New file.
* config/riscv/thead-vector-builtins.h: New file.
* config/riscv/thead-vector.md: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc | 4 +-
gcc/config/riscv/autovec.md | 2 +-
gcc/config/riscv/predicates.md | 8 +-
gcc/config/riscv/riscv-string.cc | 3 +
gcc/config/riscv/riscv-v.cc | 13 +-
.../riscv/riscv-vector-builtins-shapes.cc | 23 +
gcc/config/riscv/riscv-vector-builtins.cc | 7 +
gcc/config/riscv/riscv-vector-builtins.h | 5 +-
gcc/config/riscv/riscv-vector-switch.def | 150 +-
gcc/config/riscv/riscv.cc | 20 +-
gcc/config/riscv/riscv_th_vector.h | 49 +
gcc/config/riscv/t-riscv | 16 +
.../riscv/thead-vector-builtins-functions.def | 627 ++++
gcc/config/riscv/thead-vector-builtins.cc | 746 +++++
gcc/config/riscv/thead-vector-builtins.h | 92 +
gcc/config/riscv/thead-vector.md | 2574 +++++++++++++++++
gcc/config/riscv/vector-iterators.md | 186 +-
gcc/config/riscv/vector.md | 36 +-
.../gcc.target/riscv/rvv/base/abi-1.c | 2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
gcc/testsuite/lib/target-supports.exp | 12 +
21 files changed, 4386 insertions(+), 191 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,9 +547,9 @@ riscv*)
extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
- extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"
 [(match_operand 0 "register_operand")
 (match_operand 1 "memory_operand")
 (match_operand:ANYI 2 "const_int_operand")]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 {
 riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 1a3a4f1ecbb..d910367e59c 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@ (define_predicate "csr_operand"
 (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
- (ior (match_operand 0 "const_csr_operand")
- (match_operand 0 "register_operand")))
+ (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+ (match_operand 0 "const_csr_operand"))
+ (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates. This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
 (ior (match_operand 0 "pmode_register_operand")
- (match_operand 0 "const_csr_operand")))
+ (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+ (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
 (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop # Any more?
ret # Return
 */
+ if (TARGET_XTHEADVECTOR)
+ return false;
+
 gcc_assert (TARGET_VECTOR);
 HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 486f5deb296..710332e17db 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)
 return true;
 }
+ if (TARGET_XTHEADVECTOR)
+ {
+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+ RVV_VLMAX, GEN_INT(VLMAX)));
+ return true;
+ }
+
 if (riscv_v_ext_vls_mode_p (mode))
 {
 if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return TAIL_ANY;
+ return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy. */
@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()
 compiler pick up either agnostic or undisturbed. Maybe we
 will have a compile option like -mprefer=agnostic to set
 this value???. */
- return MASK_ANY;
+ return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx. */
@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
- if (!TARGET_VECTOR)
+ if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
 return false;
 if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+ valid for the function. */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+ tree arg;
+ unsigned i;
+
+ if (!return_type)
+ return false;
+
+ FOR_EACH_VEC_ELT (argument_types, i, arg)
+ if (!arg)
+ return false;
+
+ return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
 mode suffix at index PAIR && bi and predication suffix at index pred_idx. */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
 group.ops_infos.types[vec_type_idx].index);
 b.allocate_argument_types (function_instance, argument_types);
 b.apply_predication (function_instance, return_type, argument_types);
+
+ if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+ return;
+
 b.add_overloaded_function (function_instance, *group.shape);
 b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..f5f9000d89c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
#include "riscv-vector-builtins.h"
#include "riscv-vector-builtins-shapes.h"
#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
using namespace riscv_vector;
@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
 {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
#include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) \
+ {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
};
/* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..bb463510dd2 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
 ZVKNHB_EXT, /* Crypto vector Zvknhb sub-ext */
 ZVKSED_EXT, /* Crypto vector Zvksed sub-ext */
 ZVKSH_EXT, /* Crypto vector Zvksh sub-ext */
+ XTHEADVECTOR_EXT, /* XTheadVector extension */
};
/* Enumerates the RVV operand types. */
@@ -233,7 +234,7 @@ struct function_group_info
 switch (ext_value)
 {
 case VECTOR_EXT:
- return TARGET_VECTOR;
+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);
 case ZVBB_EXT:
 return TARGET_ZVBB;
 case ZVBB_OR_ZVKB_EXT:
@@ -252,6 +253,8 @@ struct function_group_info
 return TARGET_ZVKSED;
 case ZVKSH_EXT:
 return TARGET_ZVKSH;
+ case XTHEADVECTOR_EXT:
+ return TARGET_XTHEADVECTOR;
 default:
 gcc_unreachable ();
 }
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32. */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64. */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
 if (riscv_v_ext_vector_mode_p (mode))
 {
+ if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
 poly_int64 nunits = GET_MODE_NUNITS (mode);
 poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::preferred_simd_mode (mode);
 return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
- if (TARGET_VECTOR)
+ if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
 return riscv_vector::autovectorize_vector_modes (modes, all);
 return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 return false;
}
+/* Implements target hook vector_mode_supported_any_target_p. */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+ if (TARGET_XTHEADVECTOR)
+ return false;
+ return true;
+}
+
/* Initialize the GCC target structure. */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+ <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short. It does
+ not define the RVV types and intrinsic functions directly in C and C++
+ code, but instead uses the following pragma to tell GCC to insert the
+ necessary type and function definitions itself. The net effect is the
+ same, and the file is a complete implementation of riscv_th_vector.h. */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
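For context, a hypothetical user of this header might guard on the __riscv_th_v_intrinsic predefine (exercised by predef-__riscv_th_v_intrinsic.c in this series); the vsetvl intrinsic name below is an assumption for illustration, not something this hunk defines:

#include <stddef.h>

#ifdef __riscv_th_v_intrinsic
#include <riscv_th_vector.h>
#endif

size_t vl_for_n (size_t n)
{
#ifdef __riscv_th_v_intrinsic
  return __riscv_vsetvl_e32m1 (n);  /* intrinsic name is an assumption */
#else
  return n;                         /* scalar fallback */
#endif
}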
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
 $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
 $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
 $(RISCV_BUILTINS_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
+thead-vector-builtins.o: \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc \
+ $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+ $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+ $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+ gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+ rtx-vector-builder.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+ $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+ $(srcdir)/config/riscv/thead-vector-builtins.h \
+ $(RISCV_BUILTINS_H)
+ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+ $(srcdir)/config/riscv/thead-vector-builtins.cc
+
riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
 $(SYSTEM_H) $(TM_H)
$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use. */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions. */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores. */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions. */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16 Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions. */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions. */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
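+// The DEF_THEAD_RVV_FUNCTION entries below carry a second, th_-prefixed name
+// that binds the intrinsic to an XTheadVector-specific function base (declared
+// in thead-vector-builtins.h) instead of the standard RVV implementation.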
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations. */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions. */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions. */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions. */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
+#undef DEF_THEAD_RVV_FUNCTION
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>. */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+ bool apply_vl_p () const override
+ {
+ return false;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
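+    /* For vsetvlmax, pass x0 (register 0) as the AVL operand so that
+       VLMAX is selected.  */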
+ if (VLMAX_P)
+ e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+ else
+ e.add_input_operand (0);
+
+ tree type = builtin_types[e.type.index].vector;
+ machine_mode mode = TYPE_MODE (type);
+
+ machine_mode inner_mode = GET_MODE_INNER (mode);
+ /* SEW. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+ /* LMUL. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_vlmul (mode), Pmode));
+
+ /* TAIL_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+ /* MASK_ANY. */
+ e.add_input_operand (Pmode,
+ gen_int_mode (get_prefer_mask_policy (), Pmode));
+ return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+ }
+};
+
+/* Implements vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/
+   vluxei.v/vloxei.v/vsuxei.v/vsoxei.v codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return !STORE_P; }
+ bool apply_mask_policy_p () const override { return !STORE_P; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ if (STORE_P)
+ return CP_WRITE_MEMORY;
+ else
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ if (STORE_P || LST_TYPE == LST_INDEXED)
+ return true;
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (LST_TYPE == LST_INDEXED)
+ {
+ int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+ if (STORE_P)
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+ e.index_mode ()));
+ else
+ {
+ unsigned src_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+ unsigned dst_eew_bitsize
+ = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
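+	    /* Select the indexed-load pattern matching the ratio between
+	       the data EEW and the index EEW.  */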
+ if (dst_eew_bitsize == src_eew_bitsize)
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_same_eew (
+ unspec, e.vector_mode ()));
+ }
+ else if (dst_eew_bitsize > src_eew_bitsize)
+ {
+ unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_greater_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_greater_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_greater_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+ switch (factor)
+ {
+ case 2:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x2_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 4:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x4_smaller_eew (
+ unspec, e.vector_mode ()));
+ case 8:
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load_x8_smaller_eew (
+ unspec, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+ }
+ }
+ else if (LST_TYPE == LST_STRIDED)
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+ else
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_th_store (e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_mov (e.vector_mode ()));
+ }
+ }
+};
+
+/* Implements vneg/vnot. */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+ }
+};
+
+/* Implements vnsrl/vnsra. */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (CODE, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vncvt. */
+class th_vncvt_x : public function_base
+{
+public:
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ }
+};
+
+/* Implements vnclip/vnclipu. */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override { return true; }
+
+ bool may_require_vxrm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_wx:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+ case OP_TYPE_wv:
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vcpop. */
+class th_vcpop : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_popcount (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vfirst. */
+class th_vfirst : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_ffs (e.vector_mode (), Pmode));
+ }
+};
+
+/* Implements vmadc. */
+class th_vmadc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vmsbc. */
+class th_vmsbc : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+ bool use_mask_predication_p () const override { return false; }
+ bool has_merge_operand_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ switch (e.op_info->op)
+ {
+ case OP_TYPE_vvm:
+ return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+ case OP_TYPE_vxm:
+ return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+ case OP_TYPE_vv:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow (e.vector_mode ()));
+ case OP_TYPE_vx:
+ return e.use_exact_insn (
+ code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+ default:
+ gcc_unreachable ();
+ }
+ }
+};
+
+/* Implements vfncvt.x. */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+ }
+};
+
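+/* Implements vfncvt.f.  */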
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ rtx expand (function_expander &e) const override
+ {
+ if (e.op_info->op == OP_TYPE_f_w)
+ return e.use_exact_insn (
+ code_for_pred_th_trunc (e.vector_mode ()));
+ if (e.op_info->op == OP_TYPE_x_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+ if (e.op_info->op == OP_TYPE_xu_w)
+ return e.use_exact_insn (
+ code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+ gcc_unreachable ();
+ }
+};
+
+/* Implements floating-point reduction instructions. */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+ bool has_rounding_mode_operand_p () const override
+ {
+ return FRM_OP == HAS_FRM;
+ }
+
+ bool may_require_frm_p () const override { return true; }
+
+ bool apply_mask_policy_p () const override { return false; }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+ }
+};
+
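+/* Implements vleff.v.  */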
+class th_vleff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_contiguous_load_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vlseg.v. */
+class th_vlseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vsseg.v. */
+class th_vsseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_unit_strided_store (e.vector_mode ()));
+ }
+};
+
+/* Implements vlsseg.v. */
+class th_vlsseg : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_load (e.vector_mode ()));
+ }
+};
+
+/* Implements vssseg.v. */
+class th_vssseg : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_strided_store (e.vector_mode ()));
+ }
+};
+
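+/* Implements vluxseg.v/vloxseg.v.  */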
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_load (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
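+/* Implements vsuxseg.v/vsoxseg.v.  */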
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return false; }
+ bool apply_mask_policy_p () const override { return false; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_WRITE_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index) const override
+ {
+ return true;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_indexed_store (
+ UNSPEC, e.vector_mode (), e.index_mode ()));
+ }
+};
+
+/* Implements vlsegff.v. */
+class th_vlsegff : public function_base
+{
+public:
+ unsigned int call_properties (const function_instance &) const override
+ {
+ return CP_READ_MEMORY | CP_WRITE_CSR;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ return pred != PRED_TYPE_none;
+ }
+
+ gimple *fold (gimple_folder &f) const override
+ {
+ return fold_fault_load (f);
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ return e.use_exact_insn (
+ code_for_pred_th_fault_load (e.vector_mode ()));
+ }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+ of class <NAME>_obj. */
+#define BASE(NAME) \
+ namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declarations for RISC-V XTheadVector Extension
+ for GNU compiler.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+ Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+ Semiconductor Co., Ltd.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+
+ GCC is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3. If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+} // end namespace bases
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+(define_c_enum "unspec" [
+ UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+ (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+ (UNSPEC_REDUC_SUM "redsum")
+ (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+ (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+ (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+ (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+ (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
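+;; Lower whole-register moves (vector, mask and tuple modes) into the
+;; XTheadVector whole-register move patterns below.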
+(define_split
+ [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+ "TARGET_XTHEADVECTOR"
+ [(const_int 0)]
+ {
+ emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				     RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+ DONE;
+ })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:V_VLS_VT
+ [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+ [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr,vr, m")
+ (unspec:VB
+ [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+ (match_operand 2 "vector_length_operand" " rK, rK, rK")
+ (match_operand 3 "const_1_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.v\t%0,%1
+ vle.v\t%0,%1
+ vse.v\t%1,%0"
+ "&& REG_P (operands[0]) && REG_P (operands[1])
+ && REGNO (operands[0]) == REGNO (operands[1])"
+ [(const_int 0)]
+ ""
+ [(set_attr "type" "vimov,vlds,vlds")
+ (set_attr "mode" "<MODE>")
+ (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+ (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+ (set (attr "avl_type_idx") (const_int 3))
+ (set_attr "vl_op_idx" "2")
+ (set (attr "sew") (const_int 8))
+ (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_expand "@pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "vector_move_operand")
+ (match_operand:V_VLS 2 "vector_merge_operand")))]
+ "TARGET_XTHEADVECTOR"
+ {})
+
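+;; Integer scalar broadcast: vmv.v.x/vmv.s.x, or a zero-stride vlse.v when
+;; the scalar is in memory or is wider than a GPR.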
+(define_insn_and_split "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vr, vr, vd, vd, vr, vr, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " r, r,Wdm,Wdm,Wdm,Wdm, r, r"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vmv.v.x\t%0,%3
+ vmv.v.x\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vmv.s.x\t%0,%3
+ vmv.s.x\t%0,%3"
+ "(register_operand (operands[3], <VEL>mode)
+ || CONST_POLY_INT_P (operands[3]))
+ && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+ [(set (match_dup 0)
+ (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+ (match_dup 5) (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSI (match_dup 3))
+ (match_dup 2)))]
+ {
+ gcc_assert (can_create_pseudo_p ());
+ if (CONST_POLY_INT_P (operands[3]))
+ {
+ rtx tmp = gen_reg_rtx (<VEL>mode);
+ emit_move_insn (tmp, operands[3]);
+ operands[3] = tmp;
+ }
+ rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+ GET_MODE_ALIGNMENT (<VEL>mode));
+ m = validize_mem (m);
+ emit_move_insn (m, operands[3]);
+ m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+ operands[3] = m;
+
+    /* For SEW = 64 on RV32 systems, we expand vmv.s.x as:
+	 andi a2,a2,1
+	 vsetvl zero,a2,e64
+	 vlse64.v  */
+ if (satisfies_constraint_Wb1 (operands[1]))
+ {
+ operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+ operands[1] = CONSTM1_RTX (<VM>mode);
+ }
+ }
+ [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_broadcast<mode>"
+ [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand" "=vr, vr, vr, vr, vr, vr, vr, vr")
+ (if_then_else:V_VLSF_ZVFHMIN
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (vec_duplicate:V_VLSF_ZVFHMIN
+ (match_operand:<VEL> 3 "direct_broadcast_operand" " f, f,Wdm,Wdm,Wdm,Wdm, f, f"))
+ (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vfmv.v.f\t%0,%3
+ vfmv.v.f\t%0,%3
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero,%1.t
+ vlse.v\t%0,%3,zero
+ vlse.v\t%0,%3,zero
+ vfmv.s.f\t%0,%3
+ vfmv.s.f\t%0,%3"
+ [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
+ (set_attr "mode" "<MODE>")])
+
+;; vle.v/vse.v,vmv.v.v
+(define_insn_and_split "*pred_th_mov<mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand" "=vr, vr, vd, m, vr, vr")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V_VLS 3 "reg_or_mem_operand" " m, m, m, vr, vr, vr")
+ (match_operand:V_VLS 2 "vector_merge_operand" " 0, vu, vu, vu, vu, 0")))]
+ "(TARGET_XTHEADVECTOR
+ && (register_operand (operands[0], <MODE>mode)
+ || register_operand (operands[3], <MODE>mode)))"
+ "@
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t
+ vse.v\t%3,%0%p1
+ vmv.v.v\t%0,%3
+ vmv.v.v\t%0,%3"
+ "&& register_operand (operands[0], <MODE>mode)
+ && register_operand (operands[3], <MODE>mode)
+ && satisfies_constraint_vu (operands[2])
+ && INTVAL (operands[7]) == riscv_vector::VLMAX"
+ [(set (match_dup 0) (match_dup 3))]
+ ""
+ [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn_and_split "@pred_th_mov<mode>"
+ [(set (match_operand:VB_VLS 0 "nonimmediate_operand" "=vr, m, vr, vr, vr")
+ (if_then_else:VB_VLS
+ (unspec:VB_VLS
+ [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:VB_VLS 3 "vector_move_operand" " m, vr, vr, Wc0, Wc1")
+ (match_operand:VB_VLS 2 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ #
+ #
+ vmcpy.m\t%0,%3
+ vmclr.m\t%0
+ vmset.m\t%0"
+ "&& !reload_completed"
+ [(const_int 0)]
+ {
+ if ((MEM_P (operands[0]) || MEM_P (operands[3]))
+ || (REG_P (operands[0]) && REG_P (operands[3])
+ && INTVAL (operands[5]) == riscv_vector::VLMAX))
+ {
+ emit_move_insn (operands[0], operands[3]);
+ DONE;
+ }
+
+ FAIL;
+ }
+ [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")
+ (set_attr "mode" "<MODE>")])
+
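+;; Predicated unit-stride store (vse.v).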
+(define_insn "@pred_th_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand:V 2 "register_operand" " vr")
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "vse.v\t%2,%0%p1"
+ [(set_attr "type" "vste")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 4))
+ (set_attr "vl_op_idx" "3")])
+
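+;; Strided load: vlse.v with an explicit stride, falling back to
+;; unit-stride vle.v.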
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vr, vr, vd, vr, vr, vd")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m, m, m")
+ (match_operand 4 "<V:stride_predicate>" "<V:stride_load_constraint>")] UNSPEC_STRIDED)
+ (match_operand:V 2 "vector_merge_operand" " 0, vu, vu, 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vlse.v\t%0,%3,%z4%p1
+ vlse.v\t%0,%3,%z4
+ vlse.v\t%0,%3,%z4,%1.t
+ vle.v\t%0,%3%p1
+ vle.v\t%0,%3
+ vle.v\t%0,%3,%1.t"
+ [(set_attr "type" "vlds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (match_operand:V 0 "memory_operand" "+m, m")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 2 "<V:stride_predicate>" "<V:stride_store_constraint>")
+ (match_operand:V 3 "register_operand" " vr, vr")] UNSPEC_STRIDED)
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "@
+ vsse.v\t%3,%0,%z2%p1
+ vse.v\t%3,%0%p1"
+ [(set_attr "type" "vsts")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 5))])
+
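+;; Indexed loads.  XTheadVector has a single vlxe.v instruction: the
+;; index EEW always equals SEW and there is no ordered/unordered
+;; distinction, so every <order> variant maps to the same mnemonic.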
+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"
+ [(set (match_operand:V 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ,rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
+ (match_operand:V 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST eew is greater than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"
+ [(set (match_operand:VEEWEXT2 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT2 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"
+ [(set (match_operand:VEEWEXT4 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT4 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"
+ [(set (match_operand:VEEWEXT8 0 "register_operand" "=&vr, &vr")
+ (if_then_else:VEEWEXT8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWEXT8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:VEEWEXT8 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+;; DEST eew is smaller than SOURCE eew.
+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"
+ [(set (match_operand:VEEWTRUNC2 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC2
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC2
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC2 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"
+ [(set (match_operand:VEEWTRUNC4 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC4
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC4
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC4 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"
+ [(set (match_operand:VEEWTRUNC8 0 "register_operand" "=vd, vd, vr, vr, &vr, &vr")
+ (if_then_else:VEEWTRUNC8
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VEEWTRUNC8
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ, rJ, rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" " 0, 0, 0, 0, vr, vr")] ORDER)
+ (match_operand:VEEWTRUNC8 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxe.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vld<order>x")
+ (set_attr "mode" "<MODE>")])
+
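+;; Indexed stores.  Unlike the loads, the stores keep the ordering
+;; distinction: <th_order> selects between the ordered vsxe.v and the
+;; unordered vsuxe.v mnemonic.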
+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:RATIO64 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO64:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:RATIO32 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO32:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:RATIO16 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO16:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:RATIO8 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO8:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:RATIO4 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO4:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:RATIO2 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO2:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO1 2 "register_operand" " vr")
+ (match_operand:RATIO1 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xe.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<RATIO1:MODE>")])
+
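+;; Mask population count and find-first-set, which XTheadVector names
+;; vmpopc.m and vmfirst.m (RVV 1.0: vcpop.m and vfirst.m).  vmfirst.m
+;; returns -1 when no bit is set, which the (plus (ffs ...) -1) models,
+;; since ffs is 1-based.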
+(define_insn "@pred_th_popcount<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (popcount:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
+ "TARGET_XTHEADVECTOR"
+ "vmpopc.m\t%0,%2%p1"
+ [(set_attr "type" "vmpop")
+ (set_attr "mode" "<VB:MODE>")])
+
+(define_insn "@pred_th_ffs<VB:mode><P:mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (plus:P
+ (ffs:P
+ (unspec:VB
+ [(and:VB
+ (match_operand:VB 1 "vector_mask_operand" "vmWc1")
+ (match_operand:VB 2 "register_operand" " vr"))
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
+ (const_int -1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmfirst.m\t%0,%2%p1"
+ [(set_attr "type" "vmffs")
+ (set_attr "mode" "<VB:MODE>")])
+
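+;; Narrowing float-to-integer conversions.  These depend on the dynamic
+;; rounding mode, so FRM_REGNUM is part of the predicate unspec and the
+;; frm_mode attribute is derived from operand 8.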
+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VNCONVERT>
+ [(match_operand:V_VLSF 3 "register_operand" " vd, vd, vr, vr, vr, vr")] VFCVTS)
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.x<v_su>.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftoi")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+(define_insn "@pred_th_narrow_<float_cvt><mode>"
+ [(set (match_operand:<VNCONVERT> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<VNCONVERT>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float:<VNCONVERT>
+ (match_operand:VWCONVERTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.x<u>.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtitof")
+ (set_attr "mode" "<VNCONVERT>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
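+;; Narrowing right shifts (vnsrl/vnsra).  The shift amount may be a
+;; vector, a scalar register, or an immediate; the %o4/%v4 modifiers
+;; pick the matching .vv/.vx/.vi form and operand spelling.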
+(define_insn "@pred_th_narrow_<optab><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, vd, vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_<optab><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (any_shiftrt:VWEXTI
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vn<insn>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
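+;; Integer truncation is expressed as a narrowing logical shift right
+;; by x0 (vnsrl.vx with a zero shift), keeping the low half of each
+;; element.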
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnsrl.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vnshift")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_trunc<mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (float_truncate:<V_DOUBLE_TRUNC>
+ (match_operand:VWEXTF_ZVFHMIN 3 "register_operand" " vd, vd, vr, vr, vr, vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vfncvt.f.f.v\t%0,%3%p1"
+ [(set_attr "type" "vfncvtftof")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
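+;; Fault-only-first load (vleff.v).  A trap on any element after the
+;; first truncates vl, so the insn also sets VL_REGNUM; the second set
+;; models that side effect.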
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:V 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V
+ [(match_operand:V 3 "memory_operand" " m, m, m, m")] UNSPEC_VLEFF)
+ (match_operand:V 2 "vector_merge_operand" " vu, 0, vu, 0")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:V
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V [(match_dup 3)] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vleff.v\t%0,%3%p1"
+ [(set_attr "type" "vldff")
+ (set_attr "mode" "<MODE>")])
+
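+;; Segment (tuple-mode) unit-stride and strided loads/stores,
+;; vlseg<nf>e.v and friends, where <nf> is the number of fields in the
+;; VT tuple mode.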
+(define_insn "@pred_th_unit_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>e.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegde")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_unit_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 2 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vsseg<nf>e.v\t%2,(%z1)%p0"
+ [(set_attr "type" "vssegte")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (match_operand 4 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_STRIDED)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"
+ [(set_attr "type" "vlsegds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_strided_store<mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand 2 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VT 3 "register_operand" " vr")
+ (mem:BLK (scratch))] UNSPEC_STRIDED))]
+ "TARGET_XTHEADVECTOR"
+ "vssseg<nf>e.v\t%3,(%z1),%z2%p0"
+ [(set_attr "type" "vssegts")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_th_fault_load<mode>"
+ [(set (match_operand:VT 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VT
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ, rJ")
+ (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI
+ [(if_then_else:VT
+ (unspec:<VM>
+ [(match_dup 1) (match_dup 4) (match_dup 5)
+ (match_dup 6) (match_dup 7)
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VT
+ [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
+ (match_dup 2))] UNSPEC_MODIFY_VL))]
+ "TARGET_XTHEADVECTOR"
+ "vlseg<nf>eff.v\t%0,(%z3)%p1"
+ [(set_attr "type" "vlsegdff")
+ (set_attr "mode" "<MODE>")])
+
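+;; Indexed segment loads, one pattern per tuple size (V1T..V32T) paired
+;; with an index mode of the matching ratio; as with the ordinary
+;; indexed loads, only the single vlxseg<nf>e.v mnemonic exists.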
+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"
+ [(set (match_operand:V1T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V1T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V1T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO64I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V1T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"
+ [(set (match_operand:V2T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V2T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V2T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO32I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V2T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"
+ [(set (match_operand:V4T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V4T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V4T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO16I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V4T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"
+ [(set (match_operand:V8T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V8T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V8T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO8I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V8T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"
+ [(set (match_operand:V16T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V16T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V16T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO4I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V16T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"
+ [(set (match_operand:V32T 0 "register_operand" "=&vr, &vr")
+ (if_then_else:V32T
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:V32T
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:RATIO2I 4 "register_operand" " vr, vr")] ORDER)
+ (match_operand:V32T 2 "vector_merge_operand" " vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vlxseg<nf>e.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vlsegd<order>x")
+ (set_attr "mode" "<V32T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO64I 2 "register_operand" " vr")
+ (match_operand:V1T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V1T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO32I 2 "register_operand" " vr")
+ (match_operand:V2T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V2T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO16I 2 "register_operand" " vr")
+ (match_operand:V4T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V4T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO8I 2 "register_operand" " vr")
+ (match_operand:V8T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V8T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO4I 2 "register_operand" " vr")
+ (match_operand:V16T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V16T:MODE>")])
+
+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:RATIO2I 2 "register_operand" " vr")
+ (match_operand:V32T 3 "register_operand" " vr")] ORDER))]
+ "TARGET_XTHEADVECTOR"
+ "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";
+ [(set_attr "type" "vssegtux")
+ (set_attr "mode" "<V32T:MODE>")])
+
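+;; Unary ops synthesized from other instructions: float negation is
+;; vfsgnjn.vv with both sources equal, float abs is vfsgnjx.vv, integer
+;; not is vnot.v, and integer negation is vrsub.vx against x0.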
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_neg:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjn.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop_abs:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vfsgnjx.vv\t%0,%3,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (not_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vnot.v\t%0,%3%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vrsub.vx\t%0,%3,x0%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_th_<optab><mode>"
+ [(set (match_operand:V_VLSF 0 "register_operand" "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (any_float_unop:V_VLSF
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
+ (match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "vf<insn>.v\t%0,%3%p1"
+ [(set_attr "type" "<float_insn_type>")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
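+;; Narrowing fixed-point clips (vnclip/vnclipu).  These depend on the
+;; dynamic rounding mode in vxrm, so VXRM_REGNUM appears in the
+;; predicate unspec.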
+(define_insn "@pred_th_narrow_clip<v_su><mode>"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd,&vd, &vr, &vr,&vd, &vr, &vr, &vr, &vd, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK,rK, rK, rK,rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vr,vr, vr, vr, vd, vr, vr, vr, vd, vr, vr, vr")
+ (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " vd, vd, vr, vr,vr, vr, vr, vr, vk, vk, vk, vk")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vd,vu, vr, vu,vu, vu, vu, vr, vu, vu, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"
+ [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand" "=&vd, &vd, &vr, &vr, &vr, &vr")
+ (if_then_else:<V_DOUBLE_TRUNC>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1,vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 9 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_DOUBLE_TRUNC>
+ [(match_operand:VWEXTI 3 "register_operand" " vd, vd, vr, vr, vr, vr")
+ (match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")] VNCLIP)
+ (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, vd, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR"
+ "vnclip<v_su>.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vnclip")
+ (set_attr "mode" "<V_DOUBLE_TRUNC>")])
+
+;; Float Reduction Sum (vfred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_LMUL1> 0 "register_operand" "=vr,vr")
+ (unspec:<V_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_LMUL1> [
+ (match_operand:V_VLSF 3 "register_operand" " vr, vr")
+ (match_operand:<V_LMUL1> 4 "register_operand" " vr, vr")
+ ] ANY_FREDUC_SUM)
+ (match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)
+(define_insn "@pred_th_<th_reduc_op><mode>"
+ [(set (match_operand:<V_EXT_LMUL1> 0 "register_operand" "=&vr, &vr")
+ (unspec:<V_EXT_LMUL1>
+ [(unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" " rK, rK")
+ (match_operand 6 "const_int_operand" " i, i")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)
+ (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<V_EXT_LMUL1> [
+ (match_operand:VF_HS 3 "register_operand" " vr, vr")
+ (match_operand:<V_EXT_LMUL1> 4 "register_operand" " vr0, vr0")
+ ] ANY_FWREDUC_SUM)
+ (match_operand:<V_EXT_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
+ "TARGET_XTHEADVECTOR"
+ "vf<th_reduc_op>.vs\t%0,%3,%4%p1"
+ [(set_attr "type" "vfwred<order>")
+ (set_attr "mode" "<MODE>")
+ (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])
+
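+;; Carry/borrow mask producers (vmadc/vmsbc), with a carry-in mask and
+;; in _overflow variants without one.  The VI_D scalar expanders go
+;; through riscv_vector::sew64_scalar_helper so that a 64-bit scalar
+;; which does not fit a GPR (e.g. on rv32) is first broadcast to a
+;; vector.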
+(define_insn "@pred_th_madc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (match_operand:<VM> 3 "register_operand" " vm, vm, vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2m\t%0,%1,%v2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vvm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "register_operand" " r"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_madc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_expand "@pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (match_operand:<VM> 3 "register_operand")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[4],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4], operands[5]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[5])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "*pred_th_msbc<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (match_operand:<VM> 3 "register_operand" " vm")
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vxm\t%0,%1,%z2,%3"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_th_madc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr")
+ (unspec:<VM>
+ [(plus:VI
+ (match_operand:VI 1 "register_operand" " %vr, vr, vr")
+ (match_operand:VI 2 "vector_arith_operand" "vrvi, vr, vi"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK, rK, rK")
+ (match_operand 4 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.v%o2\t%0,%1,%v2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI
+ (match_operand:VI 1 "register_operand" " vr")
+ (match_operand:VI 2 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vv\t%0,%1,%2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_QHS
+ (vec_duplicate:VI_QHS
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_QHS 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ riscv_vector::simm5_p (operands[2]),
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_madc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(plus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmadc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_expand "@pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_int_operand"))
+ (match_operand:VI_D 1 "register_operand"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand")
+ (match_operand 4 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+{
+ if (riscv_vector::sew64_scalar_helper (
+ operands,
+ /* scalar op */&operands[2],
+ /* vl */operands[3],
+ <MODE>mode,
+ false,
+ [] (rtx *operands, rtx broadcast_scalar) {
+ emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],
+ broadcast_scalar, operands[3], operands[4]));
+ },
+ (riscv_vector::avl_type) INTVAL (operands[4])))
+ DONE;
+})
+
+(define_insn "*pred_th_msbc<mode>_overflow_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr")
+ (unspec:<VM>
+ [(minus:VI_D
+ (vec_duplicate:VI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))
+ (match_operand:VI_D 1 "register_operand" " vr"))
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
+ "TARGET_XTHEADVECTOR"
+ "vmsbc.vx\t%0,%1,%z2"
+ [(set_attr "type" "vicalu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "3")
+ (set (attr "avl_type_idx") (const_int 4))])
+
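+;; vsetvl rd,rs1,vtype instruction.  The XTheadVector vtype encoding has
+;; no ta/ma fields, so operands 4 and 5 only feed the insn attributes
+;; and do not appear in the assembly.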
+(define_insn "*th_vsetvl<mode>"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+ (match_dup 3)
+ (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\t%0,%1,e%2,%m3"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "*th_vsetvl_vtype_change_only"
+ [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+ [(match_operand 0 "const_int_operand" "i")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,zero,e%0,%m1"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "*th_vsetvl_discard_result<mode>"
+ [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+ (match_operand 1 "const_int_operand" "i")
+ (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+ (match_dup 2)
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "vsetvli\tzero,%0,e%1,%m2"
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "<MODE>")
+ (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+ (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))
+ (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))
+ (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])
+
+;; This pattern is emitted by the vsetvl/vsetvlmax intrinsics and has no
+;; side effects.  Since many optimization passes run between "expand" and
+;; "reload_completed", keeping it in this form lets them optimize it.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+ [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+ (match_operand 2 "const_int_operand" "i")
+ (match_operand 3 "const_int_operand" "i")
+ (match_operand 4 "const_int_operand" "i")
+ (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]
+ "TARGET_XTHEADVECTOR"
+ "#"
+ "&& epilogue_completed"
+ [(parallel
+ [(set (match_dup 0)
+ (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)
+ (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))
+ (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+ (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)
+ (match_dup 5)] UNSPEC_VSETVL))])]
+ ""
+ [(set_attr "type" "vsetvl")
+ (set_attr "mode" "SI")])
+
+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_arith_operand" "vrvi")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vr, vr, vi, vi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_arith_operand" " vrvi, vrvi, vr, vr, vrvi, vr, vr, vrvi, vrvi")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "ltge_operator"
+ [(match_operand:V_VLSI 3 "register_operand" " vr")
+ (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_ltge<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vr, vr, vj, vj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_ltge<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "ltge_operator"
+ [(match_operand:V_VLSI 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSI 5 "vector_neg_arith_operand" " vrvj, vrvj, vr, vr, vrvj, vr, vr, vrvj, vrvj")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.v%o5\t%0,%4,%v5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_QHS 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_QHS
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 4 "register_operand" " r"))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_cmp<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "comparison_except_eqge_operator"
+ [(match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 4 "register_operand" " r")))
+ (match_operand:V_VLSI_D 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vms%B2.vx\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_eqne<mode>_extended_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSI_D
+ (sign_extend:<VEL>
+ (match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))
+ (match_operand:V_VLSI_D 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vms%B3.vx\t%0,%4,%5%p1"
+ [(set_attr "type" "vicmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (match_operand:V_VLSF 4 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vv\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")
+ (match_operand:V_VLSF 5 "register_operand" " vr, vr, vr, vr, vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, vr, vr, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vv\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "signed_order_operator"
+ [(match_operand:V_VLSF 3 "register_operand" " vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_cmp<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_cmp<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "signed_order_operator"
+ [(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")
+ (vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "register_operand" " 0")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 2 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 4 "register_operand" " f"))
+ (match_operand:V_VLSF 3 "register_operand" " vr")])
+ (match_dup 1)))]
+ "TARGET_XTHEADVECTOR"
+ "vmf%B2.vf\t%0,%3,%4,v0.t"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")
+ (set_attr "merge_op_idx" "1")
+ (set_attr "vl_op_idx" "5")
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+;; Upstream avoids the early-clobber for LMUL <= 1 to get better codegen;
+;; here the "&vr" alternatives keep it nonetheless.
+(define_insn "*pred_th_eqne<mode>_scalar"
+ [(set (match_operand:<VM> 0 "register_operand" "=&vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK")
+ (match_operand 7 "const_int_operand" " i, i")
+ (match_operand 8 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
+
+;; We use early-clobber for source LMUL > dest LMUL.
+(define_insn "*pred_th_eqne<mode>_scalar_narrow"
+ [(set (match_operand:<VM> 0 "register_operand" "=vm, &vr, &vr, &vr, &vr")
+ (if_then_else:<VM>
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " 0,vmWc1,vmWc1,vmWc1,vmWc1")
+ (match_operand 6 "vector_length_operand" " rK, rK, rK, rK, rK")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (match_operator:<VM> 3 "equality_operator"
+ [(vec_duplicate:V_VLSF
+ (match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))
+ (match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr, vr")])
+ (match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vr, vu, vr")))]
+ "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
+ "vmf%B3.vf\t%0,%4,%5%p1"
+ [(set_attr "type" "vfcmp")
+ (set_attr "mode" "<MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
])
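+;; Sketch of the rationale for the !TARGET_XTHEADVECTOR gating below:
+;; XTheadVector has no fractional LMUL, so every fractional (RVVMF*)
+;; mode is disabled when the extension is active.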
(define_mode_iterator VI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
(define_mode_iterator VF_ZVFHMIN [
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
])
(define_mode_iterator VEEWEXT2 [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
])
(define_mode_iterator VEEWEXT4 [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
])
(define_mode_iterator VEEWTRUNC2 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 (RVVM4SI "TARGET_64BIT")
 (RVVM2SI "TARGET_64BIT")
 (RVVM1SI "TARGET_64BIT")
- (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
- RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM2HI "TARGET_64BIT")
 (RVVM1HI "TARGET_64BIT")
- (RVVMF2HI "TARGET_64BIT")
- (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
 (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
 (RVVM1QI "TARGET_64BIT")
- (RVVMF2QI "TARGET_64BIT")
- (RVVMF4QI "TARGET_64BIT")
- (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [
])
(define_mode_iterator VFULLI [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [
])
(define_mode_iterator VI_QH [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [
])
(define_mode_iterator VI_QHS_NO_M8 [
- RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [
(define_mode_iterator VF_HS [
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [
 (RVVM4HF "TARGET_ZVFH")
 (RVVM2HF "TARGET_ZVFH")
 (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
 (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [
])
(define_mode_iterator V_VLSI_QHS [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
 (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64")
- (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [
])
(define_mode_iterator RATIO64I [
- (RVVMF8QI "TARGET_MIN_VLEN > 32")
- (RVVMF4HI "TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
- RVVMF4QI
- RVVMF2HI
+ (RVVMF4QI "!TARGET_XTHEADVECTOR")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR")
 RVVM1SI
 (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
- RVVMF2QI
+ (RVVMF2QI "!TARGET_XTHEADVECTOR")
 RVVM1HI
 RVVM2SI
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [
])
(define_mode_iterator V_FRACT [
- RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
- (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [
 (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [
(define_mode_iterator VWCONVERTI [
 (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
- (RVVMF2SI "TARGET_ZVFH")
+ (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [
])
(define_mode_iterator VQEXTI [
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
 (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
- RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+ RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+ RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
- RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [
 (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
- (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [
(define_mode_iterator V_VLS_F_CONVERT_SI [
 (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
- (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
- (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
- (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+ (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+ (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
 (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
- (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+ (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
 (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
- (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+ (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@ (define_attr "sew" ""
 V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 16)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 32)
+ (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+ (if_then_else (match_test "TARGET_XTHEADVECTOR")
+ (const_int 64)
+ (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@ (define_attr "vlmul" ""
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@ (define_attr "ratio" ""
 vislide1up,vislide1down,vfslide1up,vfslide1down,\
 vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+ vlsegdff,vssegtux,vlsegdox,vlsegdux")
+ (match_test "TARGET_XTHEADVECTOR"))
+ (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>_whole"
 [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" " m,vr,vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "@
 vl%m1re<sew>.v\t%0,%1
 vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"
(define_insn "*mov<mode>"
 [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "vmv1r.v\t%0,%1"
 [(set_attr "type" "vmov")
 (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"
 (any_extend:VWEXTI
 (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
 (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf2\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"
 (any_extend:VQEXTI
 (match_operand:<V_QUAD_TRUNC> 3 "register_operand" "W43,W43,W43,W43,W86,W86,W86,W86, vr, vr"))
 (match_operand:VQEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf4\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"
 (any_extend:VOEXTI
 (match_operand:<V_OCT_TRUNC> 3 "register_operand" "W87,W87,W87,W87, vr, vr"))
 (match_operand:VOEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 "v<sz>ext.vf8\t%0,%3%p1"
 [(set_attr "type" "vext")
 (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
 }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+ return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+ #ifndef __riscv_xtheadvector
+ #error "Not __riscv_xtheadvector"
+ #endif
+ }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
--
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 15:21               ` Re: Re: [PATCH " joshua
@ 2023-12-20 15:29                 ` 钟居哲
  0 siblings, 0 replies; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 15:29 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu


Could you first send a separate patch that only adds the theadvector intrinsics that can leverage the current RVV intrinsics?

Then we can easily visit each of the following intrinsics that cannot leverage the current ones.

I expect the next patch to add the stride load/store intrinsics that cannot leverage the current intrinsics but share the same pattern as the current patterns.

The final patch adds new intrinsics that the current RVV intrinsics do not have, like vlb, etc.
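
For instance, a vlb-style intrinsic might end up looking roughly like this at the C level (just a sketch: the spelling __riscv_th_vlb_v_i32m1 and the exact prototype are illustrative, not names this series already defines):

    #include <riscv_th_vector.h>

    /* Hypothetical th.vlb.v wrapper: load int8 elements and sign-extend
       each of them to a 32-bit element while loading, something RVV 1.0
       has no single instruction for.  */
    vint32m1_t
    load_signed_bytes (const int8_t *base, size_t vl)
    {
      return __riscv_th_vlb_v_i32m1 (base, vl);
    }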

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 23:21
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

All the patterns that I "copied" from the current vector.md are necessary. The differences go beyond the "th" prefix: they are actually different patterns, since they generate totally different instructions apart from the "th_" string.

We have already tried our best to eliminate extra patterns in thead-vector.md. You can refer to the difference list in our spec and find out whether these patterns are redundant.

Joshua

------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: 2023-12-20 (Wednesday) 22:55
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

My first impression is that you are just copying the current vector.md with no pattern changes, simply adding a "th_" string to the pattern names.

It looks odd to me.

Take LLVM for example: even though the build time for the LLVM match table and tablegen is not an issue for now, they still try hard to minimize the match table and optimize the tablegen.

To me this patch just doubles the patterns, and potentially explodes the number of RISC-V patterns.

I think we should optimize the thead vector patterns and eliminate the redundant, unnecessary ones, to avoid affecting the build of the GCC toolchain.

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 22:41
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

Yes, XTheadVector does not have vfneg.v as a pseudo-instruction for vfsgnjn.vv.
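
In RVV 1.0 the negation only exists as an assembler pseudo-instruction, i.e. (the th.-prefixed spelling below just follows the usual XTheadVector convention, for illustration):

    # RVV 1.0: vfneg.v is a pseudo that the assembler expands
    vfneg.v        v2, v4     # == vfsgnjn.vv v2, v4, v4
    # XTheadVector has no such pseudo, so the explicit
    # sign-injection form has to be emitted instead:
    th.vfsgnjn.vv  v2, v4, v4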

We have listed all the differences between vector and xtheadvector in our spec. You may refer to it:

https://github.com/T-head-Semi/thead-extension-spec/blob/master/xtheadvector.adoc
https://github.com/T-head-Semi/thead-extension-spec/commit/a0d8dd857e404011562379f2e7f5fae6f9a6bfdd

Joshua

------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: 2023-12-20 (Wednesday) 22:27
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Why do you add this?

+(define_insn "@pred_th_<optab><mode>"
+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")
+ (if_then_else:V_VLSF
+   (unspec:<VM>
+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")
+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")
+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)
+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)
+   (any_float_unop:V_VLSF
+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))
+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vf<insn>.v\t%0,%3%p1"
+  [(set_attr "type" "<float_insn_type>")
+   (set_attr "mode" "<MODE>")
+   (set_attr "vl_op_idx" "4")
+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))
+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))
+   (set (attr "avl_type_idx") (const_int 7))
+   (set (attr "frm_mode")
+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])

Isn't the theadvector instruction th.vfneg.v?

juzhe.zhong@rivai.ai

From: joshua
Date: 2023-12-20 22:24
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,

The patterns you supposed redundant are all necessary, because they generate different instructions from vector.

Take pred_th_unit_strided_store as an example: xtheadvector does not have <sew> in its load/store instructions, and we cannot reuse the same pattern as vector. That is why we define a new function_base in thead-vector-builtins-functions.def.
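
To make that concrete with a two-field segment store of 32-bit elements (the mnemonics below are only meant to illustrate the template difference):

    vsseg2e32.v  v8, (a0)    # RVV 1.0: the element width is part of the mnemonic
    vsseg2e.v    v8, (a0)    # XTheadVector: no <sew> suffix; the element
                             # width is taken from vtype

So an assembler template that encodes <sew> in the mnemonic cannot simply be reused.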

Joshua

------------------------------------------------------------------

From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: 2023-12-20 (Wednesday) 22:00
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector

+// 7.6. Vector Indexed Instructions

+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)

Why do you add these?

+(define_insn "@pred_th_unit_strided_store<mode>"
+  [(set (mem:BLK (scratch))
+ (unspec:BLK
+   [(unspec:<VM>
+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+       (match_operand 3 "vector_length_operand"    "   rK")
+       (match_operand 4 "const_int_operand"        "    i")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")
+    (match_operand:VT 2 "register_operand"         "   vr")
+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
+  "TARGET_XTHEADVECTOR"
+  "vsseg<nf>e.v\t%2,(%z1)%p0"
+  [(set_attr "type" "vssegte")
+   (set_attr "mode" "<MODE>")])

These patterns are redundant; only the names are different. They should be removed.

juzhe.zhong@rivai.ai

From: Jun Sha (Joshua)
Date: 2023-12-20 20:34
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch is to handle the differences in instruction generation
between Vector and XTheadVector; adding the th. prefix to all
XTheadVector instructions is not included.

For some vector patterns that cannot be avoided, we use
!TARGET_XTHEADVECTOR to disable them in vector.md, in order not to
generate instructions that xtheadvector does not support, like vmv1r
and vsext.vf2.
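
One example of the shape such a guard takes (this is the form the change took in the v2 posting earlier in the thread; the vector.md hunk in this patch follows the same scheme):

    (define_insn "*mov<mode>"
      [(set (match_operand:VB 0 "register_operand" "=vr")
            (match_operand:VB 1 "register_operand" " vr"))]
      "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
      "vmv1r.v\t%0,%1"
      [(set_attr "type" "vmov")
       (set_attr "mode" "<MODE>")])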

gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/t-riscv: Add new files.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector-builtins.cc: New file.
	* config/riscv/thead-vector-builtins.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---

 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |   23 +
 gcc/config/riscv/riscv-vector-builtins.cc     |    7 +
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   20 +-
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  627 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  746 +++++
 gcc/config/riscv/thead-vector-builtins.h      |   92 +
 gcc/config/riscv/thead-vector.md              | 2574 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   36 +-
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 gcc/testsuite/lib/target-supports.exp         |   12 +
 21 files changed, 4386 insertions(+), 191 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc





















index f0676c830e8..4478395ab77 100644





















--- a/gcc/config.gcc





















+++ b/gcc/config.gcc





















@@ -547,9 +547,9 @@ riscv*)





















extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"





















extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"





















extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"





















- extra_objs="${extra_objs} thead.o riscv-target-attr.o"





















+ extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"





















d_target_objs="riscv-d.o"





















- extra_headers="riscv_vector.h"





















+ extra_headers="riscv_vector.h riscv_th_vector.h"





















target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"





















target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"





















;;





















diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md





















index 8b8a92f10a1..1fac56c7095 100644





















--- a/gcc/config/riscv/autovec.md





















+++ b/gcc/config/riscv/autovec.md





















@@ -2579,7 +2579,7 @@ (define_expand "rawmemchr<ANYI:mode>"





















  [(match_operand      0 "register_operand")





















    (match_operand      1 "memory_operand")





















    (match_operand:ANYI 2 "const_int_operand")]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  {





















    riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],





















  operands[2]);





















diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md





















index 1a3a4f1ecbb..d910367e59c 100644





















--- a/gcc/config/riscv/predicates.md





















+++ b/gcc/config/riscv/predicates.md





















@@ -64,8 +64,9 @@ (define_predicate "csr_operand"





















        (match_operand 0 "register_operand")))





















(define_predicate "vector_csr_operand"





















-  (ior (match_operand 0 "const_csr_operand")





















-       (match_operand 0 "register_operand")))





















+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")





















+      (match_operand 0 "const_csr_operand"))





















+    (match_operand 0 "register_operand")))





















;; V has 32-bit unsigned immediates.  This happens to be the same constraint as





















;; the csr_operand, but it's not CSR related.





















@@ -425,7 +426,8 @@ (define_predicate "immediate_register_operand"





















;; Predicates for the V extension.





















(define_special_predicate "vector_length_operand"





















  (ior (match_operand 0 "pmode_register_operand")





















-       (match_operand 0 "const_csr_operand")))





















+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")





















+    (match_operand 0 "const_csr_operand"))))





















(define_special_predicate "autovec_length_operand"





















  (ior (match_operand 0 "pmode_register_operand")





















diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc





















index 11c1f74d0b3..ec8f3486fd8 100644





















--- a/gcc/config/riscv/riscv-string.cc





















+++ b/gcc/config/riscv/riscv-string.cc





















@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)





















bnez a2, loop                   # Any more?





















ret                             # Return





















  */





















+   if (TARGET_XTHEADVECTOR)





















+    return false;





















+





















  gcc_assert (TARGET_VECTOR);





















  HOST_WIDE_INT potential_ew





















diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc





















index 486f5deb296..710332e17db 100644





















--- a/gcc/config/riscv/riscv-v.cc





















+++ b/gcc/config/riscv/riscv-v.cc





















@@ -1444,6 +1444,13 @@ legitimize_move (rtx dest, rtx *srcp)





















      return true;





















    }





















+  if (TARGET_XTHEADVECTOR)





















+      {





















+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,





















+   RVV_VLMAX, GEN_INT(VLMAX)));





















+ return true;





















+      }





















+





















  if (riscv_v_ext_vls_mode_p (mode))





















    {





















      if (GET_MODE_NUNITS (mode).to_constant () <= 31)





















@@ -1693,7 +1700,7 @@ get_prefer_tail_policy ()





















      compiler pick up either agnostic or undisturbed. Maybe we





















      will have a compile option like -mprefer=agnostic to set





















      this value???.  */





















-  return TAIL_ANY;





















+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;





















}





















/* Get prefer mask policy.  */





















@@ -1704,7 +1711,7 @@ get_prefer_mask_policy ()





















      compiler pick up either agnostic or undisturbed. Maybe we





















      will have a compile option like -mprefer=agnostic to set





















      this value???.  */





















-  return MASK_ANY;





















+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;





















}





















/* Get avl_type rtx.  */





















@@ -4294,7 +4301,7 @@ cmp_lmul_gt_one (machine_mode mode)





















bool





















vls_mode_valid_p (machine_mode vls_mode)





















{





















-  if (!TARGET_VECTOR)





















+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)





















    return false;





















  if (riscv_autovec_preference == RVV_SCALABLE)





















diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















index 4a754e0228f..6b49404a1fa 100644





















--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc





















@@ -33,6 +33,25 @@





















namespace riscv_vector {





















+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are





















+   valid for the function.  */





















+





















+static bool





















+check_type (tree return_type, vec<tree> &argument_types)





















+{





















+  tree arg;





















+  unsigned i;





















+





















+  if (!return_type)





















+    return false;





















+





















+  FOR_EACH_VEC_ELT (argument_types, i, arg)





















+    if (!arg)





















+      return false;





















+





















+  return true;





















+}





















+





















/* Add one function instance for GROUP, using operand suffix at index OI,





















    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */





















static void





















@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,





















    group.ops_infos.types[vec_type_idx].index);





















  b.allocate_argument_types (function_instance, argument_types);





















  b.apply_predication (function_instance, return_type, argument_types);





















+





















+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))





















+    return;





















+





















  b.add_overloaded_function (function_instance, *group.shape);





















  b.add_unique_function (function_instance, (*group.shape), return_type,





















argument_types);





















diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc





















index 4e2c66c2de7..f5f9000d89c 100644





















--- a/gcc/config/riscv/riscv-vector-builtins.cc





















+++ b/gcc/config/riscv/riscv-vector-builtins.cc





















@@ -51,6 +51,7 @@





















#include "riscv-vector-builtins.h"





















#include "riscv-vector-builtins-shapes.h"





















#include "riscv-vector-builtins-bases.h"





















+#include "thead-vector-builtins.h"





















using namespace riscv_vector;





















@@ -2687,6 +2688,12 @@ static function_group_info function_groups[] = {





















#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \





















  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















#include "riscv-vector-builtins-functions.def"





















+#undef DEF_RVV_FUNCTION





















+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \





















+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)             \





















+  {#NAME, &bases::BASE, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},





















+#include "thead-vector-builtins-functions.def"





















};





















/* The RVV types, with their built-in





















diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h





















index 4f38c09d73d..bb463510dd2 100644





















--- a/gcc/config/riscv/riscv-vector-builtins.h





















+++ b/gcc/config/riscv/riscv-vector-builtins.h





















@@ -123,6 +123,7 @@ enum required_ext





















  ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */





















  ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */





















  ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */





















+  XTHEADVECTOR_EXT,   /* XTheadVector extension */





















};





















/* Enumerates the RVV operand types.  */





















@@ -233,7 +234,7 @@ struct function_group_info





















    switch (ext_value)





















    {





















      case VECTOR_EXT:





















-        return TARGET_VECTOR;





















+ return (TARGET_VECTOR && !TARGET_XTHEADVECTOR);





















      case ZVBB_EXT:





















        return TARGET_ZVBB;





















      case ZVBB_OR_ZVKB_EXT:





















@@ -252,6 +253,8 @@ struct function_group_info





















        return TARGET_ZVKSED;





















      case ZVKSH_EXT:





















        return TARGET_ZVKSH;





















+      case XTHEADVECTOR_EXT:





















+ return TARGET_XTHEADVECTOR;





















      default:





















        gcc_unreachable ();





















    }





















diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def





















index 5c9f9bcbc3e..f7a66b34bae 100644





















--- a/gcc/config/riscv/riscv-vector-switch.def





















+++ b/gcc/config/riscv/riscv-vector-switch.def





















@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.





















#endif





















/* Disable modes if TARGET_MIN_VLEN == 32.  */





















-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)





















-ENTRY (RVVMF32BI, true, LMUL_F4, 32)





















-ENTRY (RVVMF16BI, true, LMUL_F2, 16)





















+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)





















+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)





















+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)





















ENTRY (RVVMF8BI, true, LMUL_1, 8)





















ENTRY (RVVMF4BI, true, LMUL_2, 4)





















ENTRY (RVVMF2BI, true, LMUL_4, 2)





















@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)





















ENTRY (RVVM4QI, true, LMUL_4, 2)





















ENTRY (RVVM2QI, true, LMUL_2, 4)





















ENTRY (RVVM1QI, true, LMUL_1, 8)





















-ENTRY (RVVMF2QI, true, LMUL_F2, 16)





















-ENTRY (RVVMF4QI, true, LMUL_F4, 32)





















-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)





















+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)





















+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)





















+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)





















/* Disable modes if TARGET_MIN_VLEN == 32.  */





















ENTRY (RVVM8HI, true, LMUL_8, 2)





















ENTRY (RVVM4HI, true, LMUL_4, 4)





















ENTRY (RVVM2HI, true, LMUL_2, 8)





















ENTRY (RVVM1HI, true, LMUL_1, 16)





















-ENTRY (RVVMF2HI, true, LMUL_F2, 32)





















-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)





















+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)





















+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)





















/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */





















ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)





















ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)





















ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)





















ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)





















-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)





















-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)





















+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)





















+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)





















/* Disable modes if TARGET_MIN_VLEN == 32.  */





















ENTRY (RVVM8SI, true, LMUL_8, 4)





















ENTRY (RVVM4SI, true, LMUL_4, 8)





















ENTRY (RVVM2SI, true, LMUL_2, 16)





















ENTRY (RVVM1SI, true, LMUL_1, 32)





















-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)





















+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)





















/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */





















ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)





















ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)





















ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)





















ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)





















-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)





















+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)





















/* Disable modes if !TARGET_VECTOR_ELEN_64.  */





















ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)





















@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)





















#endif





















TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)





















TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)





















TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)





















TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)





















TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)





















-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)





















-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)





















-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)





















+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)





















+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)





















+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)





















TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)





















TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)





















TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)





















TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)





















TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)





















TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)





















+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)





















+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)





















TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)





















TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)





















-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)





















-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index d3010bed8d8..18cc64b63e6 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1389,6 +1389,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9888,7 +9891,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10239,7 +10242,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10422,6 +10425,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10765,6 +10778,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
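As the NOTE in the header explains, the pragma defers all type and intrinsic definitions to the compiler, so user code simply includes riscv_th_vector.h. For illustration only, a minimal strip-mined loop, assuming the pragma exposes the standard RVV intrinsic names (__riscv_vsetvl_e32m1 and friends, as riscv_vector.h does; the exact name set is not spelled out in this patch):

#include <riscv_th_vector.h>

/* a[i] += b[i], processed vl elements at a time.  */
void
vadd_i32 (int32_t *a, const int32_t *b, size_t n)
{
  while (n > 0)
    {
      size_t vl = __riscv_vsetvl_e32m1 (n);           /* active vector length */
      vint32m1_t va = __riscv_vle32_v_i32m1 (a, vl);  /* unit-stride loads */
      vint32m1_t vb = __riscv_vle32_v_i32m1 (b, vl);
      __riscv_vse32_v_i32m1 (a, __riscv_vadd_vv_i32m1 (va, vb, vl), vl);
      a += vl;
      b += vl;
      n -= vl;
    }
}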
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
 
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
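The new thead-vector-builtins-functions.def below follows GCC's usual X-macro convention: a consumer defines DEF_RVV_FUNCTION and/or DEF_THEAD_RVV_FUNCTION to the expansion it needs and then includes the file; the #ifndef guards at the top of the file make any macro a consumer leaves undefined expand to nothing. A hypothetical consumer, shown only to illustrate the pattern (the real expansions build the function tables in riscv-vector-builtins.cc and thead-vector-builtins.cc):

/* Collect the builtin names declared by the .def file into a table.  */
static const char *const thead_builtin_names[] = {
#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) #NAME,
#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO) #NAME,
#include "thead-vector-builtins-functions.def"
#undef DEF_THEAD_RVV_FUNCTION
#undef DEF_RVV_FUNCTION
};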
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..a85ca24cb31
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,627 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#ifndef DEF_THEAD_RVV_FUNCTION
+#define DEF_THEAD_RVV_FUNCTION(NAME, BASE, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+/* Internal helper functions for gimple fold use.  */
+DEF_RVV_FUNCTION (read_vl, read_vl, none_preds, p_none_void_ops)
+DEF_RVV_FUNCTION (vlenb, vlenb, none_preds, ul_none_void_ops)
+
+/* 6. Configuration-Setting Instructions.  */
+
+DEF_THEAD_RVV_FUNCTION (vsetvl, th_vsetvl, vsetvl, none_preds, i_none_size_size_ops)
+DEF_THEAD_RVV_FUNCTION (vsetvlmax, th_vsetvlmax, vsetvlmax, none_preds, i_none_size_void_ops)
+
+/* 7. Vector Loads and Stores. */
+
+// 7.4. Vector Unit-Stride Instructions
+DEF_THEAD_RVV_FUNCTION (vle, th_vle, loadstore, full_preds, all_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vse, th_vse, loadstore, none_m_preds, all_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlm, th_vlm, loadstore, none_preds, b_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsm, th_vsm, loadstore, none_preds, b_v_scalar_ptr_ops)
+
+// 7.5. Vector Strided Instructions
+DEF_THEAD_RVV_FUNCTION (vlse, th_vlse, loadstore, full_preds, all_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vsse, th_vsse, loadstore, none_m_preds, all_v_scalar_ptr_ptrdiff_ops)
+
+// 7.6. Vector Indexed Instructions
+DEF_THEAD_RVV_FUNCTION (vluxei8, th_vluxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei16, th_vluxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei32, th_vluxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxei64, th_vluxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei8, th_vloxei8, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei16, th_vloxei16, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei32, th_vloxei32, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxei64, th_vloxei64, indexed_loadstore, full_preds, all_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei8, th_vsuxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei16, th_vsuxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei32, th_vsuxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsuxei64, th_vsuxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei8, th_vsoxei8, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei16, th_vsoxei16, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei32, th_vsoxei32, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vsoxei64, th_vsoxei64, indexed_loadstore, none_m_preds, all_v_scalar_ptr_eew64_index_ops)
+
+// 7.7. Unit-stride Fault-Only-First Loads
+DEF_THEAD_RVV_FUNCTION (vleff, th_vleff, fault_load, full_preds, all_v_scalar_const_ptr_size_ptr_ops)
+
+// TODO: 7.8. Vector Load/Store Segment Instructions
+
+/* 11. Vector Integer Arithmetic Instructions.  */
+
+// 11.1. Vector Single-Width Integer Add and Subtract
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vadd, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vsub, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vrsub, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vneg, th_vneg, alu, full_preds, iu_v_ops)
+
+// 11.2. Vector Widening Integer Add/Subtract
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwaddu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwv_ops)
+DEF_RVV_FUNCTION (vwsubu, widen_alu, full_preds, u_wwx_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwadd, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwv_ops)
+DEF_RVV_FUNCTION (vwsub, widen_alu, full_preds, i_wwx_ops)
+DEF_RVV_FUNCTION (vwcvt_x, alu, full_preds, i_x_x_v_ops)
+DEF_RVV_FUNCTION (vwcvtu_x, alu, full_preds, u_x_x_v_ops)
+
+// 11.3. Vector Integer Extension
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf2_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf4_ops)
+DEF_RVV_FUNCTION (vzext, widen_alu, full_preds, u_vf8_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf2_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf4_ops)
+DEF_RVV_FUNCTION (vsext, widen_alu, full_preds, i_vf8_ops)
+
+// 11.4. Vector Integer Add-with-Carry/Subtract-with-Borrow Instructions
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vadc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmadc, return_mask, none_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvvm_ops)
+DEF_RVV_FUNCTION (vsbc, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvvm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvxm_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsbc, return_mask, none_preds, iu_mvx_ops)
+
+// 11.5. Vector Bitwise Logical Instructions
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vand, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vor, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vxor, alu, full_preds, iu_vvx_ops)
+DEF_THEAD_RVV_FUNCTION (vnot, th_vnot, alu, full_preds, iu_v_ops)
+
+// 11.6. Vector Single-Width Shift Instructions
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsll, alu, full_preds, iu_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsra, alu, full_preds, i_shift_vvx_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vsrl, alu, full_preds, u_shift_vvx_ops)
+
+// 11.7. Vector Narrowing Integer Right Shift Instructions
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsrl, th_vnsrl, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_THEAD_RVV_FUNCTION (vnsra, th_vnsra, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+DEF_THEAD_RVV_FUNCTION (vncvt_x, th_vncvt_x, narrow_alu, full_preds, iu_trunc_ops)
+
+// 11.8. Vector Integer Compare Instructions
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmseq, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvv_ops)
+DEF_RVV_FUNCTION (vmsne, return_mask, none_m_mu_preds, iu_mvx_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsltu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmslt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsleu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsle, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgtu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsgt, return_mask, none_m_mu_preds, i_mvx_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvv_ops)
+DEF_RVV_FUNCTION (vmsgeu, return_mask, none_m_mu_preds, u_mvx_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvv_ops)
+DEF_RVV_FUNCTION (vmsge, return_mask, none_m_mu_preds, i_mvx_ops)
+
+// 11.9. Vector Integer Min/Max Instructions
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vminu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmin, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vmaxu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vmax, alu, full_preds, i_vvx_ops)
+
+// 11.10. Vector Single-Width Integer Multiply Instructions
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvv_ops)
+DEF_RVV_FUNCTION (vmul, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vmulh, alu, full_preds, full_v_i_vvx_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvv_ops)
+DEF_RVV_FUNCTION (vmulhu, alu, full_preds, full_v_u_vvx_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvv_ops)
+DEF_RVV_FUNCTION (vmulhsu, alu, full_preds, full_v_i_su_vvx_ops)
+
+// 11.11. Vector Integer Divide Instructions
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vdivu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vdiv, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vremu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vrem, alu, full_preds, i_vvx_ops)
+
+// 11.12. Vector Widening Integer Multiply Instructions
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvv_ops)
+DEF_RVV_FUNCTION (vwmul, alu, full_preds, i_wvx_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvv_ops)
+DEF_RVV_FUNCTION (vwmulu, alu, full_preds, u_wvx_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvv_ops)
+DEF_RVV_FUNCTION (vwmulsu, alu, full_preds, i_su_wvx_ops)
+
+// 11.13. Vector Single-Width Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmacc, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsac, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vmadd, alu, full_preds, iu_vvxv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvvv_ops)
+DEF_RVV_FUNCTION (vnmsub, alu, full_preds, iu_vvxv_ops)
+
+// 11.14. Vector Widening Integer Multiply-Add Instructions
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccu, alu, full_preds, u_wwxv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwvv_ops)
+DEF_RVV_FUNCTION (vwmacc, alu, full_preds, i_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwvv_ops)
+DEF_RVV_FUNCTION (vwmaccsu, alu, full_preds, i_su_wwxv_ops)
+DEF_RVV_FUNCTION (vwmaccus, alu, full_preds, i_us_wwxv_ops)
+
+// 11.15. Vector Integer Merge Instructions
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, all_vvvm_ops)
+DEF_RVV_FUNCTION (vmerge, no_mask_policy, none_tu_preds, iu_vvxm_ops)
+
+// 11.16 Vector Integer Move Instructions
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, all_v_ops)
+DEF_RVV_FUNCTION (vmv_v, move, none_tu_preds, iu_x_ops)
+
+/* 12. Vector Fixed-Point Arithmetic Instructions. */
+
+// 12.1. Vector Single-Width Saturating Add and Subtract
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vsaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vsadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vssubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vssub, alu, full_preds, i_vvx_ops)
+
+// 12.2. Vector Single-Width Averaging Add and Subtract
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vaaddu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vaadd, alu, full_preds, i_vvx_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvv_ops)
+DEF_RVV_FUNCTION (vasubu, alu, full_preds, u_vvx_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvv_ops)
+DEF_RVV_FUNCTION (vasub, alu, full_preds, i_vvx_ops)
+
+// 12.3. Vector Single-Width Fractional Multiply with Rounding and Saturation
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvv_ops)
+DEF_RVV_FUNCTION (vsmul, alu, full_preds, full_v_i_vvx_ops)
+
+// 12.4. Vector Single-Width Scaling Shift Instructions
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssrl, alu, full_preds, u_shift_vvx_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvv_ops)
+DEF_RVV_FUNCTION (vssra, alu, full_preds, i_shift_vvx_ops)
+
+// 12.5. Vector Narrowing Fixed-Point Clip Instructions
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclipu, narrow_alu, full_preds, u_narrow_shift_vwx_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwv_ops)
+DEF_RVV_FUNCTION (vnclip, narrow_alu, full_preds, i_narrow_shift_vwx_ops)
+
+/* 13. Vector Floating-Point Instructions.  */
+
+// 13.2. Vector Single-Width Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfadd_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsub_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrsub_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.3. Vector Widening Floating-Point Add/Subtract Instructions
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub, widen_alu, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwadd_frm, widen_alu_frm, full_preds, f_wwf_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwv_ops)
+DEF_RVV_FUNCTION (vfwsub_frm, widen_alu_frm, full_preds, f_wwf_ops)
+
+// 13.4. Vector Single-Width Floating-Point Multiply/Divide Instructions
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmul_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfdiv_frm, alu_frm, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfrdiv_frm, alu_frm, full_preds, f_vvf_ops)
+
+// 13.5. Vector Widening Floating-Point Multiply
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul, alu, full_preds, f_wvf_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvv_ops)
+DEF_RVV_FUNCTION (vfwmul_frm, alu_frm, full_preds, f_wvf_ops)
+
+// 13.6. Vector Single-Width Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd, alu, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub, alu, full_preds, f_vvfv_ops)
+
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmacc_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsac_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmadd_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvvv_ops)
+DEF_RVV_FUNCTION (vfnmsub_frm, alu_frm, full_preds, f_vvfv_ops)
+
+// 13.7. Vector Widening Floating-Point Fused Multiply-Add Instructions
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac, alu, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac, alu, full_preds, f_wwfv_ops)
+
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmacc_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwvv_ops)
+DEF_RVV_FUNCTION (vfwnmsac_frm, alu_frm, full_preds, f_wwfv_ops)
+
+// 13.8. Vector Floating-Point Square-Root Instruction
+DEF_RVV_FUNCTION (vfsqrt, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfsqrt_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.9. Vector Floating-Point Reciprocal Square-Root Estimate Instruction
+DEF_RVV_FUNCTION (vfrsqrt7, alu, full_preds, f_v_ops)
+
+// 13.10. Vector Floating-Point Reciprocal Estimate Instruction
+DEF_RVV_FUNCTION (vfrec7, alu, full_preds, f_v_ops)
+
+DEF_RVV_FUNCTION (vfrec7_frm, alu_frm, full_preds, f_v_ops)
+
+// 13.11. Vector Floating-Point MIN/MAX Instructions
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmin, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfmax, alu, full_preds, f_vvf_ops)
+
+// 13.12. Vector Floating-Point Sign-Injection Instructions
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnj, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjn, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvv_ops)
+DEF_RVV_FUNCTION (vfsgnjx, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfneg, alu, full_preds, f_v_ops)
+DEF_RVV_FUNCTION (vfabs, alu, full_preds, f_v_ops)
+
+// 13.13. Vector Floating-Point Compare Instructions
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfeq, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfne, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmflt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfle, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfgt, return_mask, none_m_mu_preds, f_mvf_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvv_ops)
+DEF_RVV_FUNCTION (vmfge, return_mask, none_m_mu_preds, f_mvf_ops)
+
+// 13.14. Vector Floating-Point Classify Instruction
+DEF_RVV_FUNCTION (vfclass, alu, full_preds, f_to_u_v_ops)
+
+// 13.15. Vector Floating-Point Merge Instruction
+DEF_RVV_FUNCTION (vfmerge, no_mask_policy, none_tu_preds, f_vvfm_ops)
+
+// 13.16. Vector Floating-Point Move Instruction
+DEF_RVV_FUNCTION (vfmv_v, move, none_tu_preds, f_f_ops)
+
+// 13.17. Single-Width Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfcvt_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_x, alu, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_rtz_xu, alu, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f, alu, full_preds, u_to_f_xu_v_ops)
+
+DEF_RVV_FUNCTION (vfcvt_x_frm, alu_frm, full_preds, f_to_i_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_xu_frm, alu_frm, full_preds, f_to_u_f_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, i_to_f_x_v_ops)
+DEF_RVV_FUNCTION (vfcvt_f_frm, alu_frm, full_preds, u_to_f_xu_v_ops)
+
+// 13.18. Widening Floating-Point/Integer Type-Convert Instructions
+DEF_RVV_FUNCTION (vfwcvt_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_x, alu, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_rtz_xu, alu, full_preds, f_to_wu_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, i_to_wf_x_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, u_to_wf_xu_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_f, alu, full_preds, f_to_wf_f_v_ops)
+
+DEF_RVV_FUNCTION (vfwcvt_x_frm, alu_frm, full_preds, f_to_wi_f_v_ops)
+DEF_RVV_FUNCTION (vfwcvt_xu_frm, alu_frm, full_preds, f_to_wu_f_v_ops)
+
+// 13.19. Narrowing Floating-Point/Integer Type-Convert Instructions
+DEF_THEAD_RVV_FUNCTION (vfncvt_x, th_vfncvt_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu, th_vfncvt_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_x, narrow_alu, full_preds, f_to_ni_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rtz_xu, narrow_alu, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f, th_vfncvt_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+DEF_RVV_FUNCTION (vfncvt_rod_f, narrow_alu, full_preds, f_to_nf_f_w_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfncvt_x_frm, th_vfncvt_x_frm, narrow_alu_frm, full_preds, f_to_ni_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_xu_frm, th_vfncvt_xu_frm, narrow_alu_frm, full_preds, f_to_nu_f_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, i_to_nf_x_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, u_to_nf_xu_w_ops)
+DEF_THEAD_RVV_FUNCTION (vfncvt_f_frm, th_vfncvt_f_frm, narrow_alu_frm, full_preds, f_to_nf_f_w_ops)
+
+/* 14. Vector Reduction Operations.  */
+
+// 14.1. Vector Single-Width Integer Reduction Instructions
+DEF_RVV_FUNCTION (vredsum, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmaxu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmax, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredminu, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredmin, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredand, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredor, reduc_alu, no_mu_preds, iu_vs_ops)
+DEF_RVV_FUNCTION (vredxor, reduc_alu, no_mu_preds, iu_vs_ops)
+
+// 14.2. Vector Widening Integer Reduction Instructions
+DEF_RVV_FUNCTION (vwredsum, reduc_alu, no_mu_preds, wi_vs_ops)
+DEF_RVV_FUNCTION (vwredsumu, reduc_alu, no_mu_preds, wu_vs_ops)
+
+// 14.3. Vector Single-Width Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfredusum, th_vfredusum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum, th_vfredosum, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmax, reduc_alu, no_mu_preds, f_vs_ops)
+DEF_RVV_FUNCTION (vfredmin, reduc_alu, no_mu_preds, f_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfredusum_frm, th_vfredusum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfredosum_frm, th_vfredosum_frm, reduc_alu_frm, no_mu_preds, f_vs_ops)
+
+// 14.4. Vector Widening Floating-Point Reduction Instructions
+DEF_THEAD_RVV_FUNCTION (vfwredosum, th_vfwredosum, reduc_alu, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum, th_vfwredusum, reduc_alu, no_mu_preds, wf_vs_ops)
+
+DEF_THEAD_RVV_FUNCTION (vfwredosum_frm, th_vfwredosum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+DEF_THEAD_RVV_FUNCTION (vfwredusum_frm, th_vfwredusum_frm, reduc_alu_frm, no_mu_preds, wf_vs_ops)
+
+/* 15. Vector Mask Instructions.  */
+
+// 15.1. Vector Mask-Register Logical Instructions
+DEF_RVV_FUNCTION (vmand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnand, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmandn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmorn, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmxnor, mask_alu, none_preds, b_mmm_ops)
+DEF_RVV_FUNCTION (vmmv, mask_alu, none_preds, b_mm_ops)
+DEF_RVV_FUNCTION (vmclr, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmset, mask_alu, none_preds, b_m_ops)
+DEF_RVV_FUNCTION (vmnot, mask_alu, none_preds, b_mm_ops)
+// 15.2. Vector count population in mask vcpop.m
+DEF_THEAD_RVV_FUNCTION (vcpop, th_vcpop, mask_alu, none_m_preds, b_ulong_m_ops)
+// 15.3. vfirst find-first-set mask bit
+DEF_THEAD_RVV_FUNCTION (vfirst, th_vfirst, mask_alu, none_m_preds, b_long_m_ops)
+// 15.4. vmsbf.m set-before-first mask bit
+DEF_RVV_FUNCTION (vmsbf, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.5. vmsif.m set-including-first mask bit
+DEF_RVV_FUNCTION (vmsif, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.6. vmsof.m set-only-first mask bit
+DEF_RVV_FUNCTION (vmsof, mask_alu, none_m_mu_preds, b_mm_ops)
+// 15.8. Vector Iota Instruction
+DEF_RVV_FUNCTION (viota, mask_alu, full_preds, u_vm_ops)
+// 15.9. Vector Element Index Instruction
+DEF_RVV_FUNCTION (vid, alu, full_preds, u_v_ops)
+
+/* 16. Vector Permutation Instructions.  */
+
+// 16.1. Integer Scalar Move Instructions
+DEF_RVV_FUNCTION (vmv_x, scalar_move, none_preds, iu_x_s_ops)
+DEF_RVV_FUNCTION (vmv_s, move, none_tu_preds, iu_s_x_ops)
+
+// 16.2. Floating-Point Scalar Move Instructions
+DEF_RVV_FUNCTION (vfmv_f, scalar_move, none_preds, f_f_s_ops)
+DEF_RVV_FUNCTION (vfmv_s, move, none_tu_preds, f_s_f_ops)
+
+// 16.3. Vector Slide Instructions
+DEF_RVV_FUNCTION (vslideup, alu, full_preds, all_vvvx_ops)
+DEF_RVV_FUNCTION (vslidedown, alu, full_preds, all_vvx_ops)
+DEF_RVV_FUNCTION (vslide1up, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vslide1down, alu, full_preds, iu_vvx_ops)
+DEF_RVV_FUNCTION (vfslide1up, alu, full_preds, f_vvf_ops)
+DEF_RVV_FUNCTION (vfslide1down, alu, full_preds, f_vvf_ops)
+
+// 16.4. Vector Register Gather Instructions
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvv_ops)
+DEF_RVV_FUNCTION (vrgather, alu, full_preds, all_gather_vvx_ops)
+DEF_RVV_FUNCTION (vrgatherei16, alu, full_preds, all_gatherei16_vvv_ops)
+
+// 16.5. Vector Compress Instruction
+DEF_RVV_FUNCTION (vcompress, alu, none_tu_preds, all_vvm_ops)
+
+/* Miscellaneous Vector Functions.  */
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_i_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, f_v_u_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, i_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, u_v_f_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_eew64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool2_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool4_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool8_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool16_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool32_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, iu_v_bool64_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_signed_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew8_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew16_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew32_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vreinterpret, misc, none_preds, b_v_unsigned_eew64_lmul1_interpret_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x2_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x4_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x8_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x16_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x32_ops)
+DEF_RVV_FUNCTION (vlmul_ext, misc, none_preds, all_v_vlmul_ext_x64_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x2_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x4_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x8_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x16_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x32_ops)
+DEF_RVV_FUNCTION (vlmul_trunc, misc, none_preds, all_v_vlmul_trunc_x64_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_lmul4_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul1_x8_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x2_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul2_x4_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_lmul4_x2_ops)
+
+// Tuple types
+DEF_RVV_FUNCTION (vset, vset, none_preds, all_v_vset_tuple_ops)
+DEF_RVV_FUNCTION (vget, vget, none_preds, all_v_vget_tuple_ops)
+DEF_RVV_FUNCTION (vcreate, vcreate, none_preds, all_v_vcreate_tuple_ops)
+DEF_RVV_FUNCTION (vundefined, vundefined, none_preds, all_none_void_tuple_ops)
+DEF_THEAD_RVV_FUNCTION (vlseg, th_vlseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vsseg, th_vsseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ops)
+DEF_THEAD_RVV_FUNCTION (vlsseg, th_vlsseg, seg_loadstore, full_preds, tuple_v_scalar_const_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vssseg, th_vssseg, seg_loadstore, none_m_preds, tuple_v_scalar_ptr_ptrdiff_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vluxseg, th_vluxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew8_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew16_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew32_index_ops)
+DEF_THEAD_RVV_FUNCTION (vloxseg, th_vloxseg, seg_indexed_loadstore, full_preds, tuple_v_scalar_const_ptr_eew64_index_ops)














+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsuxseg, th_vsuxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew8_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew16_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew32_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vsoxseg, th_vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)





















+DEF_THEAD_RVV_FUNCTION (vlsegff, th_vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)





















+#undef REQUIRED_EXTENSIONS





















+





















+#undef DEF_RVV_FUNCTION





















+#undef DEF_THEAD_RVV_FUNCTION





















\ No newline at end of file





















diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..9d84ed39937
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,746 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements vsetvl<mode> && vsetvlmax<mode>.  */
+template<bool VLMAX_P>
+class th_vsetvl : public function_base
+{
+public:
+  bool apply_vl_p () const override
+  {
+    return false;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (VLMAX_P)
+      e.add_input_operand (Pmode, gen_rtx_REG (Pmode, 0));
+    else
+      e.add_input_operand (0);
+
+    tree type = builtin_types[e.type.index].vector;
+    machine_mode mode = TYPE_MODE (type);
+
+    machine_mode inner_mode = GET_MODE_INNER (mode);
+    /* SEW.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (GET_MODE_BITSIZE (inner_mode), Pmode));
+
+    /* LMUL.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_vlmul (mode), Pmode));
+
+    /* TAIL_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_tail_policy (), Pmode));
+
+    /* MASK_ANY.  */
+    e.add_input_operand (Pmode,
+      gen_int_mode (get_prefer_mask_policy (), Pmode));
+    return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+  }
+};
+
+/* Implements
+ * vle.v/vse.v/vlm.v/vsm.v/vlse.v/vsse.v/vluxei.v/vloxei.v/vsuxei.v/vsoxei.v
+ * codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, bool ORDERED_P>
+class th_loadstore : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (LST_TYPE == LST_INDEXED)
+      {
+        int unspec = ORDERED_P ? UNSPEC_ORDERED : UNSPEC_UNORDERED;
+        if (STORE_P)
+          return e.use_exact_insn (
+            code_for_pred_th_indexed_store (unspec, e.vector_mode (),
+                                            e.index_mode ()));
+        else
+          {
+            unsigned src_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.index_mode ()));
+            unsigned dst_eew_bitsize
+              = GET_MODE_BITSIZE (GET_MODE_INNER (e.vector_mode ()));
+            if (dst_eew_bitsize == src_eew_bitsize)
+              {
+                return e.use_exact_insn (
+                  code_for_pred_th_indexed_load_same_eew (
+                    unspec, e.vector_mode ()));
+              }
+            else if (dst_eew_bitsize > src_eew_bitsize)
+              {
+                unsigned factor = dst_eew_bitsize / src_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_greater_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_greater_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+            else
+              {
+                unsigned factor = src_eew_bitsize / dst_eew_bitsize;
+                switch (factor)
+                  {
+                  case 2:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x2_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 4:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x4_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  case 8:
+                    return e.use_exact_insn (
+                      code_for_pred_th_indexed_load_x8_smaller_eew (
+                        unspec, e.vector_mode ()));
+                  default:
+                    gcc_unreachable ();
+                  }
+              }
+          }
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_strided_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_th_strided_load (e.vector_mode ()));
+      }
+    else
+      {
+        if (STORE_P)
+          return e.use_contiguous_store_insn (
+            code_for_pred_th_store (e.vector_mode ()));
+        else
+          return e.use_contiguous_load_insn (
+            code_for_pred_mov (e.vector_mode ()));
+      }
+  }
+};
+
+/* Implements vneg/vnot.  */
+template<rtx_code CODE, enum frm_op_type FRM_OP = NO_FRM>
+class th_unop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (CODE, e.vector_mode ()));
+  }
+};
+
+/* Implements vnsrl/vnsra.  */
+template<rtx_code CODE>
+class th_vnshift : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_scalar (CODE, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow (CODE, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vncvt.  */
+class th_vncvt_x : public function_base
+{
+public:
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_trunc (e.vector_mode ()));
+  }
+};
+
+/* Implements vnclip/vnclipu.  */
+template<int UNSPEC>
+class th_vnclip : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override { return true; }
+
+  bool may_require_vxrm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_wx:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip_scalar (UNSPEC, e.vector_mode ()));
+      case OP_TYPE_wv:
+        return e.use_exact_insn (
+          code_for_pred_th_narrow_clip (UNSPEC, e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vcpop.  */
+class th_vcpop : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_popcount (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vfirst.  */
+class th_vfirst : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_ffs (e.vector_mode (), Pmode));
+  }
+};
+
+/* Implements vmadc.  */
+class th_vmadc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_madc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (code_for_pred_th_madc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_madc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vmsbc.  */
+class th_vmsbc : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    switch (e.op_info->op)
+      {
+      case OP_TYPE_vvm:
+        return e.use_exact_insn (code_for_pred_th_msbc (e.vector_mode ()));
+      case OP_TYPE_vxm:
+        return e.use_exact_insn (code_for_pred_th_msbc_scalar (e.vector_mode ()));
+      case OP_TYPE_vv:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow (e.vector_mode ()));
+      case OP_TYPE_vx:
+        return e.use_exact_insn (
+          code_for_pred_th_msbc_overflow_scalar (e.vector_mode ()));
+      default:
+        gcc_unreachable ();
+      }
+  }
+};
+
+/* Implements vfncvt.x.  */
+template<int UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_x : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_narrow_fcvt_x_f (UNSPEC, e.arg_mode (0)));
+  }
+};
+
+template<enum frm_op_type FRM_OP = NO_FRM>
+class th_vfncvt_f : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  rtx expand (function_expander &e) const override
+  {
+    if (e.op_info->op == OP_TYPE_f_w)
+      return e.use_exact_insn (
+        code_for_pred_th_trunc (e.vector_mode ()));
+    if (e.op_info->op == OP_TYPE_x_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (FLOAT, e.arg_mode (0)));
+    if (e.op_info->op == OP_TYPE_xu_w)
+      return e.use_exact_insn (
+        code_for_pred_th_narrow (UNSIGNED_FLOAT, e.arg_mode (0)));
+    gcc_unreachable ();
+  }
+};
+
+/* Implements floating-point reduction instructions.  */
+template<unsigned UNSPEC, enum frm_op_type FRM_OP = NO_FRM>
+class th_freducop : public function_base
+{
+public:
+  bool has_rounding_mode_operand_p () const override
+  {
+    return FRM_OP == HAS_FRM;
+  }
+
+  bool may_require_frm_p () const override { return true; }
+
+  bool apply_mask_policy_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (code_for_pred_th (UNSPEC, e.vector_mode ()));
+  }
+};
+
+class th_vleff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_contiguous_load_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vlseg.v.  */
+class th_vlseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vsseg.v.  */
+class th_vsseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_unit_strided_store (e.vector_mode ()));
+  }
+};
+
+/* Implements vlsseg.v.  */
+class th_vlsseg : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_load (e.vector_mode ()));
+  }
+};
+
+/* Implements vssseg.v.  */
+class th_vssseg : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_strided_store (e.vector_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_load : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_load (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+template<int UNSPEC>
+class th_seg_indexed_store : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_WRITE_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index) const override
+  {
+    return true;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_indexed_store (
+        UNSPEC, e.vector_mode (), e.index_mode ()));
+  }
+};
+
+/* Implements vlsegff.v.  */
+class th_vlsegff : public function_base
+{
+public:
+  unsigned int call_properties (const function_instance &) const override
+  {
+    return CP_READ_MEMORY | CP_WRITE_CSR;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    return pred != PRED_TYPE_none;
+  }
+
+  gimple *fold (gimple_folder &f) const override
+  {
+    return fold_fault_load (f);
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    return e.use_exact_insn (
+      code_for_pred_th_fault_load (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_vsetvl<false> th_vsetvl_obj;
+static CONSTEXPR const th_vsetvl<true> th_vsetvlmax_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vle_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vse_obj;
+static CONSTEXPR const th_loadstore<false, LST_UNIT_STRIDE, false> th_vlm_obj;
+static CONSTEXPR const th_loadstore<true, LST_UNIT_STRIDE, false> th_vsm_obj;
+static CONSTEXPR const th_loadstore<false, LST_STRIDED, false> th_vlse_obj;
+static CONSTEXPR const th_loadstore<true, LST_STRIDED, false> th_vsse_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, false> th_vluxei64_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei8_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei16_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei32_obj;
+static CONSTEXPR const th_loadstore<false, LST_INDEXED, true> th_vloxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, false> th_vsuxei64_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei8_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei16_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei32_obj;
+static CONSTEXPR const th_loadstore<true, LST_INDEXED, true> th_vsoxei64_obj;
+static CONSTEXPR const th_unop<NEG> th_vneg_obj;
+static CONSTEXPR const th_unop<NOT> th_vnot_obj;
+static CONSTEXPR const th_vnshift<LSHIFTRT> th_vnsrl_obj;
+static CONSTEXPR const th_vnshift<ASHIFTRT> th_vnsra_obj;
+static CONSTEXPR const th_vncvt_x th_vncvt_x_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIP> th_vnclip_obj;
+static CONSTEXPR const th_vnclip<UNSPEC_VNCLIPU> th_vnclipu_obj;
+static CONSTEXPR const th_vcpop th_vcpop_obj;
+static CONSTEXPR const th_vfirst th_vfirst_obj;
+static CONSTEXPR const th_vmadc th_vmadc_obj;
+static CONSTEXPR const th_vmsbc th_vmsbc_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT> th_vfncvt_x_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_VFCVT, HAS_FRM> th_vfncvt_x_frm_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT> th_vfncvt_xu_obj;
+static CONSTEXPR const th_vfncvt_x<UNSPEC_UNSIGNED_VFCVT, HAS_FRM> th_vfncvt_xu_frm_obj;
+static CONSTEXPR const th_vfncvt_f<NO_FRM> th_vfncvt_f_obj;
+static CONSTEXPR const th_vfncvt_f<HAS_FRM> th_vfncvt_f_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED> th_vfredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_UNORDERED, HAS_FRM> th_vfredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED> th_vfredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_REDUC_SUM_ORDERED, HAS_FRM> th_vfredosum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED> th_vfwredusum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_UNORDERED, HAS_FRM> th_vfwredusum_frm_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED> th_vfwredosum_obj;
+static CONSTEXPR const th_freducop<UNSPEC_WREDUC_SUM_ORDERED, HAS_FRM> th_vfwredosum_frm_obj;
+static CONSTEXPR const th_vleff th_vleff_obj;
+static CONSTEXPR const th_vlseg th_vlseg_obj;
+static CONSTEXPR const th_vsseg th_vsseg_obj;
+static CONSTEXPR const th_vlsseg th_vlsseg_obj;
+static CONSTEXPR const th_vssseg th_vssseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_UNORDERED> th_vluxseg_obj;
+static CONSTEXPR const th_seg_indexed_load<UNSPEC_ORDERED> th_vloxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_UNORDERED> th_vsuxseg_obj;
+static CONSTEXPR const th_seg_indexed_store<UNSPEC_ORDERED> th_vsoxseg_obj;
+static CONSTEXPR const th_vlsegff th_vlsegff_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vsetvl)
+BASE (th_vsetvlmax)
+BASE (th_vle)
+BASE (th_vse)
+BASE (th_vlm)
+BASE (th_vsm)
+BASE (th_vlse)
+BASE (th_vsse)
+BASE (th_vluxei8)
+BASE (th_vluxei16)
+BASE (th_vluxei32)
+BASE (th_vluxei64)
+BASE (th_vloxei8)
+BASE (th_vloxei16)
+BASE (th_vloxei32)
+BASE (th_vloxei64)
+BASE (th_vsuxei8)
+BASE (th_vsuxei16)
+BASE (th_vsuxei32)
+BASE (th_vsuxei64)
+BASE (th_vsoxei8)
+BASE (th_vsoxei16)
+BASE (th_vsoxei32)
+BASE (th_vsoxei64)
+BASE (th_vneg)
+BASE (th_vnot)
+BASE (th_vnsrl)
+BASE (th_vnsra)
+BASE (th_vncvt_x)
+BASE (th_vnclip)
+BASE (th_vnclipu)
+BASE (th_vcpop)
+BASE (th_vfirst)
+BASE (th_vmadc)
+BASE (th_vmsbc)
+BASE (th_vfncvt_x)
+BASE (th_vfncvt_x_frm)
+BASE (th_vfncvt_xu)
+BASE (th_vfncvt_xu_frm)
+BASE (th_vfncvt_f)
+BASE (th_vfncvt_f_frm)
+BASE (th_vfredusum)
+BASE (th_vfredusum_frm)
+BASE (th_vfredosum)
+BASE (th_vfredosum_frm)
+BASE (th_vfwredusum)
+BASE (th_vfwredusum_frm)
+BASE (th_vfwredosum)
+BASE (th_vfwredosum_frm)
+BASE (th_vleff)
+BASE (th_vlseg)
+BASE (th_vsseg)
+BASE (th_vlsseg)
+BASE (th_vssseg)
+BASE (th_vluxseg)
+BASE (th_vloxseg)
+BASE (th_vsuxseg)
+BASE (th_vsoxseg)
+BASE (th_vlsegff)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..d0bf00b8e81
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,92 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vsetvl;
+extern const function_base *const th_vsetvlmax;
+extern const function_base *const th_vle;
+extern const function_base *const th_vse;
+extern const function_base *const th_vlm;
+extern const function_base *const th_vsm;
+extern const function_base *const th_vlse;
+extern const function_base *const th_vsse;
+extern const function_base *const th_vluxei8;
+extern const function_base *const th_vluxei16;
+extern const function_base *const th_vluxei32;
+extern const function_base *const th_vluxei64;
+extern const function_base *const th_vloxei8;
+extern const function_base *const th_vloxei16;
+extern const function_base *const th_vloxei32;
+extern const function_base *const th_vloxei64;
+extern const function_base *const th_vsuxei8;
+extern const function_base *const th_vsuxei16;
+extern const function_base *const th_vsuxei32;
+extern const function_base *const th_vsuxei64;
+extern const function_base *const th_vsoxei8;
+extern const function_base *const th_vsoxei16;
+extern const function_base *const th_vsoxei32;
+extern const function_base *const th_vsoxei64;
+extern const function_base *const th_vneg;
+extern const function_base *const th_vnot;
+extern const function_base *const th_vnsrl;
+extern const function_base *const th_vnsra;
+extern const function_base *const th_vncvt_x;
+extern const function_base *const th_vnclip;
+extern const function_base *const th_vnclipu;
+extern const function_base *const th_vcpop;
+extern const function_base *const th_vfirst;
+extern const function_base *const th_vmadc;
+extern const function_base *const th_vmsbc;
+extern const function_base *const th_vfncvt_x;
+extern const function_base *const th_vfncvt_x_frm;
+extern const function_base *const th_vfncvt_xu;
+extern const function_base *const th_vfncvt_xu_frm;
+extern const function_base *const th_vfncvt_f;
+extern const function_base *const th_vfncvt_f_frm;
+extern const function_base *const th_vfredusum;
+extern const function_base *const th_vfredusum_frm;
+extern const function_base *const th_vfredosum;
+extern const function_base *const th_vfredosum_frm;
+extern const function_base *const th_vfwredusum;
+extern const function_base *const th_vfwredusum_frm;
+extern const function_base *const th_vfwredosum;
+extern const function_base *const th_vfwredosum_frm;
+extern const function_base *const th_vleff;
+extern const function_base *const th_vlseg;
+extern const function_base *const th_vsseg;
+extern const function_base *const th_vlsseg;
+extern const function_base *const th_vssseg;
+extern const function_base *const th_vluxseg;
+extern const function_base *const th_vloxseg;
+extern const function_base *const th_vsuxseg;
+extern const function_base *const th_vsoxseg;
+extern const function_base *const th_vlsegff;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..072fb5e68e1
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,2574 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_int_attr th_order [
+  (UNSPEC_ORDERED "") (UNSPEC_UNORDERED "u")
+])
+
+(define_int_attr th_reduc_op [
+  (UNSPEC_REDUC_SUM "redsum")
+  (UNSPEC_REDUC_SUM_ORDERED "redosum") (UNSPEC_REDUC_SUM_UNORDERED "redsum")
+  (UNSPEC_REDUC_MAXU "redmaxu") (UNSPEC_REDUC_MAX "redmax") (UNSPEC_REDUC_MINU "redminu") (UNSPEC_REDUC_MIN "redmin")
+  (UNSPEC_REDUC_AND "redand") (UNSPEC_REDUC_OR "redor") (UNSPEC_REDUC_XOR "redxor")
+  (UNSPEC_WREDUC_SUM "wredsum") (UNSPEC_WREDUC_SUMU "wredsumu")
+  (UNSPEC_WREDUC_SUM_ORDERED "wredosum") (UNSPEC_WREDUC_SUM_UNORDERED "wredsum")
+])
+
+(define_code_iterator neg_unop [neg])
+(define_code_iterator not_unop [not])
+
+(define_code_iterator any_float_unop_neg [neg])
+(define_code_iterator any_float_unop_abs [abs])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:V_VLS_VT
+          [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+           (match_operand 3 "const_1_operand"         "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:VB
+          [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+           (match_operand 3 "const_1_operand"         "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_expand "@pred_th_mov<mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+        [(match_operand:<VM> 1 "vector_mask_operand")
+         (match_operand 4 "vector_length_operand")
+         (match_operand 5 "const_int_operand")
+         (match_operand 6 "const_int_operand")
+         (match_operand 7 "const_int_operand")
+         (reg:SI VL_REGNUM)
+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSI 0 "register_operand"                 "=vr, vr, vd, vd, vr, vr, vr, vr")
+        (if_then_else:V_VLSI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+             (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+             (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSI
+            (match_operand:<VEL> 3 "direct_broadcast_operand"       " r,  r,Wdm,Wdm,Wdm,Wdm,  r,  r"))
+          (match_operand:V_VLSI 2 "vector_merge_operand"            "vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.x\t%0,%3
+   vmv.v.x\t%0,%3
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero,%1.t
+   vlse.v\t%0,%3,zero
+   vlse.v\t%0,%3,zero
+   vmv.s.x\t%0,%3
+   vmv.s.x\t%0,%3"
+  "(register_operand (operands[3], <VEL>mode)
+  || CONST_POLY_INT_P (operands[3]))
+  && GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
+  [(set (match_dup 0)
+        (if_then_else:V_VLSI (unspec:<VM> [(match_dup 1) (match_dup 4)
+             (match_dup 5) (match_dup 6) (match_dup 7)
+             (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSI (match_dup 3))
+          (match_dup 2)))]
+  {
+    gcc_assert (can_create_pseudo_p ());
+    if (CONST_POLY_INT_P (operands[3]))
+      {
+        rtx tmp = gen_reg_rtx (<VEL>mode);
+        emit_move_insn (tmp, operands[3]);
+        operands[3] = tmp;
+      }
+    rtx m = assign_stack_local (<VEL>mode, GET_MODE_SIZE (<VEL>mode),
+                                GET_MODE_ALIGNMENT (<VEL>mode));
+    m = validize_mem (m);
+    emit_move_insn (m, operands[3]);
+    m = gen_rtx_MEM (<VEL>mode, force_reg (Pmode, XEXP (m, 0)));
+    operands[3] = m;
+
+    /* For SEW = 64 in RV32 system, we expand vmv.s.x:
+       andi a2,a2,1
+       vsetvl zero,a2,e64
+       vlse64.v  */
+    if (satisfies_constraint_Wb1 (operands[1]))
+      {
+        operands[4] = riscv_vector::gen_avl_for_scalar_move (operands[4]);
+        operands[1] = CONSTM1_RTX (<VM>mode);
+      }
+  }
+  [(set_attr "type" "vimov,vimov,vlds,vlds,vlds,vlds,vimovxv,vimovxv")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_broadcast<mode>"
+  [(set (match_operand:V_VLSF_ZVFHMIN 0 "register_operand"         "=vr, vr, vr, vr, vr, vr, vr, vr")
+        (if_then_else:V_VLSF_ZVFHMIN
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_broadcast_mask_operand" "Wc1,Wc1, vm, vm,Wc1,Wc1,Wb1,Wb1")
+             (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK, rK, rK, rK, rK")
+             (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,  i,  i,  i,  i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (vec_duplicate:V_VLSF_ZVFHMIN
+            (match_operand:<VEL> 3 "direct_broadcast_operand"       " f,  f,Wdm,Wdm,Wdm,Wdm,  f,  f"))
+          (match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand"    "vu,  0, vu,  0, vu,  0, vu,  0")))]
+  "TARGET_XTHEADVECTOR"
















+  "@





















+   vfmv.v.f\t%0,%3





















+   vfmv.v.f\t%0,%3





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero,%1.t





















+   vlse.v\t%0,%3,zero





















+   vlse.v\t%0,%3,zero





















+   vfmv.s.f\t%0,%3





















+   vfmv.s.f\t%0,%3"





















+  [(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; vle.v/vse.v,vmv.v.v





















+(define_insn_and_split "*pred_th_mov<mode>"





















+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"            "=vr,    vr,    vd,     m,    vr,    vr")





















+    (if_then_else:V_VLS





















+      (unspec:<VM>





















+        [(match_operand:<VM> 1 "vector_mask_operand"           "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")





















+         (match_operand 4 "vector_length_operand"              "   rK,    rK,    rK,    rK,    rK,    rK")





















+         (match_operand 5 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (match_operand 6 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (match_operand 7 "const_int_operand"                  "    i,     i,     i,     i,     i,     i")





















+         (reg:SI VL_REGNUM)





















+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+      (match_operand:V_VLS 3 "reg_or_mem_operand"              "    m,     m,     m,    vr,    vr,    vr")





















+      (match_operand:V_VLS 2 "vector_merge_operand"            "    0,    vu,    vu,    vu,    vu,     0")))]





















+  "(TARGET_XTHEADVECTOR





















+    && (register_operand (operands[0], <MODE>mode)





















+        || register_operand (operands[3], <MODE>mode)))"





















+  "@





















+   vle.v\t%0,%3%p1





















+   vle.v\t%0,%3





















+   vle.v\t%0,%3,%1.t





















+   vse.v\t%3,%0%p1





















+   vmv.v.v\t%0,%3





















+   vmv.v.v\t%0,%3"





















+  "&& register_operand (operands[0], <MODE>mode)





















+   && register_operand (operands[3], <MODE>mode)





















+   && satisfies_constraint_vu (operands[2])





















+   && INTVAL (operands[7]) == riscv_vector::VLMAX"





















+  [(set (match_dup 0) (match_dup 3))]





















+  ""





















+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn_and_split "@pred_th_mov<mode>"





















+  [(set (match_operand:VB_VLS 0 "nonimmediate_operand"               "=vr,   m,  vr,  vr,  vr")





















+ (if_then_else:VB_VLS





















+   (unspec:VB_VLS





















+     [(match_operand:VB_VLS 1 "vector_all_trues_mask_operand" "Wc1, Wc1, Wc1, Wc1, Wc1")





















+      (match_operand 4 "vector_length_operand"            " rK,  rK,  rK,  rK,  rK")





















+      (match_operand 5 "const_int_operand"                "  i,   i,   i,   i,   i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operand:VB_VLS 3 "vector_move_operand"              "  m,  vr,  vr, Wc0, Wc1")





















+   (match_operand:VB_VLS 2 "vector_undef_operand"             " vu,  vu,  vu,  vu,  vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+   #





















+   #





















+   vmcpy.m\t%0,%3





















+   vmclr.m\t%0





















+   vmset.m\t%0"





















+  "&& !reload_completed"





















+  [(const_int 0)]





















+  {





















+    if ((MEM_P (operands[0]) || MEM_P (operands[3]))





















+        || (REG_P (operands[0]) && REG_P (operands[3])





















+     && INTVAL (operands[5]) == riscv_vector::VLMAX))





















+      {





















+ emit_move_insn (operands[0], operands[3]);





















+ DONE;





















+      }





















+





















+    FAIL;





















+  }





















+  [(set_attr "type" "vldm,vstm,vmalu,vmalu,vmalu")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_store<mode>"





















+  [(set (match_operand:V 0 "memory_operand"                 "+m")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")





















+      (match_operand 3 "vector_length_operand"    "   rK")





















+      (match_operand 4 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operand:V 2 "register_operand"         "    vr")





















+   (match_dup 0)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vse.v\t%2,%0%p1"





















+  [(set_attr "type" "vste")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "avl_type_idx") (const_int 4))





















+   (set_attr "vl_op_idx" "3")])





















+





















+(define_insn "@pred_th_strided_load<mode>"





















+  [(set (match_operand:V 0 "register_operand"              "=vr,    vr,    vd,    vr,    vr,    vd")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm,    vmWc1,   Wc1,    vm")





















+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK,       rK,    rK,    rK")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (match_operand 8 "const_int_operand"        "    i,     i,     i,        i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand:V 3 "memory_operand"         "     m,     m,     m,    m,     m,     m")





















+      (match_operand 4 "<V:stride_predicate>"     "<V:stride_load_constraint>")] UNSPEC_STRIDED)





















+   (match_operand:V 2 "vector_merge_operand"      "     0,    vu,    vu,    0,    vu,    vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+  vlse.v\t%0,%3,%z4%p1





















+  vlse.v\t%0,%3,%z4





















+  vlse.v\t%0,%3,%z4,%1.t





















+  vle.v\t%0,%3%p1





















+  vle.v\t%0,%3





















+  vle.v\t%0,%3,%1.t"





















+  [(set_attr "type" "vlds")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_strided_store<mode>"





















+  [(set (match_operand:V 0 "memory_operand"                 "+m,    m")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,    vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK,       rK")





















+      (match_operand 5 "const_int_operand"        "    i,        i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand 2 "<V:stride_predicate>"     "<V:stride_store_constraint>")





















+      (match_operand:V 3 "register_operand"       "   vr,       vr")] UNSPEC_STRIDED)





















+   (match_dup 0)))]





















+  "TARGET_XTHEADVECTOR"





















+  "@





















+  vsse.v\t%3,%0,%z2%p1





















+  vse.v\t%3,%0%p1"





















+  [(set_attr "type" "vsts")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_same_eew"





















+  [(set (match_operand:V 0 "register_operand"             "=vd, vr,vd, vr")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")





















+      (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")





















+      (match_operand 6 "const_int_operand"         "  i,  i, i,  i")





















+      (match_operand 7 "const_int_operand"         "  i,  i, i,  i")





















+      (match_operand 8 "const_int_operand"         "  i,  i, i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)





















+   (match_operand:V 2 "vector_merge_operand"       " vu, vu, 0,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; DEST eew is greater than SOURCE eew.





















+(define_insn "@pred_th_indexed_<order>load<mode>_x2_greater_eew"





















+  [(set (match_operand:VEEWEXT2 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT2





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT2





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_DOUBLE_TRUNC> 4 "register_operand" "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT2 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x4_greater_eew"





















+  [(set (match_operand:VEEWEXT4 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT4





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT4





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_QUAD_TRUNC> 4 "register_operand"   "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT4 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x8_greater_eew"





















+  [(set (match_operand:VEEWEXT8 0 "register_operand"                    "=&vr,  &vr")





















+ (if_then_else:VEEWEXT8





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  "   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "    i,    i")





















+      (match_operand 7 "const_int_operand"                      "    i,    i")





















+      (match_operand 8 "const_int_operand"                      "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWEXT8





















+     [(match_operand 3 "pmode_reg_or_0_operand"                 "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_OCT_TRUNC> 4 "register_operand"    "   vr,   vr")] ORDER)





















+   (match_operand:VEEWEXT8 2 "vector_merge_operand"             "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; DEST eew is smaller than SOURCE eew.





















+(define_insn "@pred_th_indexed_<order>load<mode>_x2_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC2 0 "register_operand"               "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC2





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"             " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                    "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC2





















+     [(match_operand 3 "pmode_reg_or_0_operand"               " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_DOUBLE_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC2 2 "vector_merge_operand"         " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x4_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC4 0 "register_operand"             "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC4





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC4





















+     [(match_operand 3 "pmode_reg_or_0_operand"             " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_QUAD_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC4 2 "vector_merge_operand"       " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<mode>_x8_smaller_eew"





















+  [(set (match_operand:VEEWTRUNC8 0 "register_operand"            "=vd, vd, vr, vr,  &vr,  &vr")





















+ (if_then_else:VEEWTRUNC8





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"          " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"             " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                 "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VEEWTRUNC8





















+     [(match_operand 3 "pmode_reg_or_0_operand"            " rJ, rJ, rJ, rJ,   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:<VINDEX_OCT_EXT> 4 "register_operand" "  0,  0,  0,  0,   vr,   vr")] ORDER)





















+   (match_operand:VEEWTRUNC8 2 "vector_merge_operand"      " vu,  0, vu,  0,   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxe.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vld<order>x")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO64:mode><RATIO64I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO64I 2 "register_operand" "  vr")





















+    (match_operand:RATIO64 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO64:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO32:mode><RATIO32I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO32I 2 "register_operand" "  vr")





















+    (match_operand:RATIO32 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO32:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO16:mode><RATIO16I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO16I 2 "register_operand" "  vr")





















+    (match_operand:RATIO16 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO16:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO8:mode><RATIO8I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO8I 2 "register_operand" "  vr")





















+    (match_operand:RATIO8 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO8:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO4:mode><RATIO4I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")





















+    (match_operand:RATIO4I 2 "register_operand" "  vr")





















+    (match_operand:RATIO4 3 "register_operand"  "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO4:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO2:mode><RATIO2I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")





















+    (match_operand:RATIO2I 2 "register_operand"  "  vr")





















+    (match_operand:RATIO2 3 "register_operand"   "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO2:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<RATIO1:mode><RATIO1:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"       "  rJ")





















+    (match_operand:RATIO1 2 "register_operand"   "  vr")





















+    (match_operand:RATIO1 3 "register_operand"    "  vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xe.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vstux")





















+   (set_attr "mode" "<RATIO1:MODE>")])





















+





















+(define_insn "@pred_th_popcount<VB:mode><P:mode>"





















+  [(set (match_operand:P 0 "register_operand"               "=r")





















+ (popcount:P





















+   (unspec:VB





















+     [(and:VB





















+        (match_operand:VB 1 "vector_mask_operand" "vmWc1")





















+        (match_operand:VB 2 "register_operand"    "   vr"))





















+      (match_operand 3 "vector_length_operand"    "   rK")





















+      (match_operand 4 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmpopc.m\t%0,%2%p1"





















+  [(set_attr "type" "vmpop")





















+   (set_attr "mode" "<VB:MODE>")])





















+





















+(define_insn "@pred_th_ffs<VB:mode><P:mode>"





















+  [(set (match_operand:P 0 "register_operand"                 "=r")





















+ (plus:P





















+   (ffs:P





















+     (unspec:VB





















+       [(and:VB





















+          (match_operand:VB 1 "vector_mask_operand" "vmWc1")





















+          (match_operand:VB 2 "register_operand"    "   vr"))





















+        (match_operand 3 "vector_length_operand"    "   rK")





















+        (match_operand 4 "const_int_operand"        "    i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))





















+   (const_int -1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmfirst.m\t%0,%2%p1"





















+  [(set_attr "type" "vmffs")





















+   (set_attr "mode" "<VB:MODE>")])





















+





















+(define_insn "@pred_th_narrow_fcvt_x<v_su>_f<mode>"





















+  [(set (match_operand:<VNCONVERT> 0 "register_operand"        "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<VNCONVERT>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 4 "vector_length_operand"          " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 5 "const_int_operand"              "  i,  i,  i,  i,    i,    i")





















+      (match_operand 6 "const_int_operand"              "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"              "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"              "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:<VNCONVERT>





















+      [(match_operand:V_VLSF 3 "register_operand"       "  vd,  vd,  vr,  vr,   vr,   vr")] VFCVTS)





















+   (match_operand:<VNCONVERT> 2 "vector_merge_operand"  " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfncvt.x<v_su>.f.v\t%0,%3%p1"





















+  [(set_attr "type" "vfncvtftoi")





















+   (set_attr "mode" "<VNCONVERT>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_narrow_<float_cvt><mode>"





















+  [(set (match_operand:<VNCONVERT> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<VNCONVERT>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 4 "vector_length_operand"         " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 5 "const_int_operand"             "  i,  i,  i,  i,    i,    i")





















+      (match_operand 6 "const_int_operand"             "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"             "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float:<VNCONVERT>





















+      (match_operand:VWCONVERTI 3 "register_operand"   "  vd,  vd,  vr,  vr,   vr,   vr"))





















+   (match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfncvt.f.x<u>.v\t%0,%3%p1"





















+  [(set_attr "type" "vfncvtitof")





















+   (set_attr "mode" "<VNCONVERT>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_narrow_<optab><mode>"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, vd, vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (truncate:<V_DOUBLE_TRUNC>





















+     (any_shiftrt:VWEXTI





















+      (match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")





















+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")))





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vn<insn>.v%o4\t%0,%3,%v4%p1"





















+  [(set_attr "type" "vnshift")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+(define_insn "@pred_th_narrow_<optab><mode>_scalar"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (truncate:<V_DOUBLE_TRUNC>





















+     (any_shiftrt:VWEXTI





















+      (match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")





















+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")))





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vn<insn>.v%o4\t%0,%3,%4%p1"





















+  [(set_attr "type" "vnshift")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+(define_insn "@pred_th_trunc<mode>"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 4 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 5 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (truncate:<V_DOUBLE_TRUNC>





















+     (match_operand:VWEXTI 3 "register_operand"                 "  vd,  vd,  vr,  vr,   vr,   vr"))





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnsrl.vx\t%0,%3,x0%p1"





















+  [(set_attr "type" "vnshift")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_trunc<mode>"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"       "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"           " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 4 "vector_length_operand"              " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 5 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 6 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                  "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (float_truncate:<V_DOUBLE_TRUNC>





















+      (match_operand:VWEXTF_ZVFHMIN 3 "register_operand"            "  vd,  vd,  vr,  vr,   vr,   vr"))





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfncvt.f.f.v\t%0,%3%p1"





















+  [(set_attr "type" "vfncvtftof")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_fault_load<mode>"





















+  [(set (match_operand:V 0 "register_operand"              "=vd,    vd,    vr,    vr")





















+ (if_then_else:V





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "   vm,    vm,   Wc1,   Wc1")





















+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK,    rK")





















+      (match_operand 5 "const_int_operand"        "    i,     i,     i,     i")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V





















+     [(match_operand:V 3 "memory_operand"         "    m,     m,     m,     m")] UNSPEC_VLEFF)





















+   (match_operand:V 2 "vector_merge_operand"      "   vu,     0,    vu,     0")))





















+   (set (reg:SI VL_REGNUM)





















+   (unspec:SI





















+     [(if_then_else:V





















+        (unspec:<VM>





















+ [(match_dup 1) (match_dup 4) (match_dup 5)





















+ (match_dup 6) (match_dup 7)





















+ (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+        (unspec:V [(match_dup 3)] UNSPEC_VLEFF)





















+        (match_dup 2))] UNSPEC_MODIFY_VL))]





















+  "TARGET_XTHEADVECTOR"





















+  "vleff.v\t%0,%3%p1"





















+  [(set_attr "type" "vldff")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_unit_strided_load<mode>"





















+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")





















+ (if_then_else:VT





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")





















+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")





















+      (match_operand 5 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VT





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")





















+      (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)





















+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlseg<nf>e.v\t%0,(%z3)%p1"





















+  [(set_attr "type" "vlsegde")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_unit_strided_store<mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+       (match_operand 3 "vector_length_operand"    "   rK")





















+       (match_operand 4 "const_int_operand"        "    i")





















+       (reg:SI VL_REGNUM)





















+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")





















+    (match_operand:VT 2 "register_operand"         "   vr")





















+    (mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]





















+  "TARGET_XTHEADVECTOR"





















+  "vsseg<nf>e.v\t%2,(%z1)%p0"





















+  [(set_attr "type" "vssegte")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_strided_load<mode>"





















+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")





















+ (if_then_else:VT





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")





















+      (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 8 "const_int_operand"        "    i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VT





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")





















+      (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")





















+      (mem:BLK (scratch))] UNSPEC_STRIDED)





















+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlsseg<nf>e.v\t%0,(%z3),%z4%p1"





















+  [(set_attr "type" "vlsegds")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_strided_store<mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+      [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+       (match_operand 4 "vector_length_operand"    "   rK")





















+       (match_operand 5 "const_int_operand"        "    i")





















+       (reg:SI VL_REGNUM)





















+       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"      "   rJ")





















+    (match_operand 2 "pmode_reg_or_0_operand"      "   rJ")





















+    (match_operand:VT 3 "register_operand"         "   vr")





















+    (mem:BLK (scratch))] UNSPEC_STRIDED))]





















+  "TARGET_XTHEADVECTOR"





















+  "vssseg<nf>e.v\t%3,(%z1),%z2%p0"





















+  [(set_attr "type" "vssegts")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_fault_load<mode>"





















+  [(set (match_operand:VT 0 "register_operand"             "=vr,    vr,    vd")





















+ (if_then_else:VT





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")





















+      (match_operand 4 "vector_length_operand"    "   rK,    rK,    rK")





















+      (match_operand 5 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 6 "const_int_operand"        "    i,     i,     i")





















+      (match_operand 7 "const_int_operand"        "    i,     i,     i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:VT





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")





















+      (mem:BLK (scratch))] UNSPEC_VLEFF)





















+   (match_operand:VT 2 "vector_merge_operand"     "    0,    vu,    vu")))





















+   (set (reg:SI VL_REGNUM)





















+        (unspec:SI





















+          [(if_then_else:VT





















+      (unspec:<VM>





















+        [(match_dup 1) (match_dup 4) (match_dup 5)





















+         (match_dup 6) (match_dup 7)





















+         (reg:SI VL_REGNUM)





















+         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+      (unspec:VT





















+         [(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)





















+      (match_dup 2))] UNSPEC_MODIFY_VL))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlseg<nf>eff.v\t%0,(%z3)%p1"





















+  [(set_attr "type" "vlsegdff")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V1T:mode><RATIO64I:mode>"





















+  [(set (match_operand:V1T 0 "register_operand"           "=&vr,  &vr")





















+ (if_then_else:V1T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V1T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO64I 4 "register_operand"     "   vr,   vr")] ORDER)





















+   (match_operand:V1T 2 "vector_merge_operand"    "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V1T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V2T:mode><RATIO32I:mode>"





















+  [(set (match_operand:V2T 0 "register_operand"           "=&vr,  &vr")





















+ (if_then_else:V2T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V2T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO32I 4 "register_operand"     "   vr,   vr")] ORDER)





















+   (match_operand:V2T 2 "vector_merge_operand"    "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V2T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V4T:mode><RATIO16I:mode>"





















+  [(set (match_operand:V4T 0 "register_operand"           "=&vr,  &vr")





















+ (if_then_else:V4T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V4T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO16I 4 "register_operand"     "   vr,   vr")] ORDER)





















+   (match_operand:V4T 2 "vector_merge_operand"    "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V4T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V8T:mode><RATIO8I:mode>"





















+  [(set (match_operand:V8T 0 "register_operand"           "=&vr,  &vr")





















+ (if_then_else:V8T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V8T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO8I 4 "register_operand"     "   vr,   vr")] ORDER)





















+   (match_operand:V8T 2 "vector_merge_operand"    "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V8T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V16T:mode><RATIO4I:mode>"





















+  [(set (match_operand:V16T 0 "register_operand"          "=&vr,  &vr")





















+ (if_then_else:V16T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V16T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO4I 4 "register_operand"    "   vr,   vr")] ORDER)





















+   (match_operand:V16T 2 "vector_merge_operand"   "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V16T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<order>load<V32T:mode><RATIO2I:mode>"





















+  [(set (match_operand:V32T 0 "register_operand"          "=&vr,  &vr")





















+ (if_then_else:V32T





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"    "   rK,   rK")





















+      (match_operand 6 "const_int_operand"        "    i,    i")





















+      (match_operand 7 "const_int_operand"        "    i,    i")





















+      (match_operand 8 "const_int_operand"        "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:V32T





















+     [(match_operand 3 "pmode_reg_or_0_operand"   "   rJ,   rJ")





















+      (mem:BLK (scratch))





















+      (match_operand:RATIO2I 4 "register_operand"    "   vr,   vr")] ORDER)





















+   (match_operand:V32T 2 "vector_merge_operand"   "   vu,    0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vlxseg<nf>e.v\t%0,(%z3),%4%p1"





















+  [(set_attr "type" "vlsegd<order>x")





















+   (set_attr "mode" "<V32T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V1T:mode><RATIO64I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO64I 2 "register_operand"       "   vr")





















+    (match_operand:V1T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V1T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V2T:mode><RATIO32I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO32I 2 "register_operand"       "   vr")





















+    (match_operand:V2T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V2T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V4T:mode><RATIO16I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO16I 2 "register_operand"       "   vr")





















+    (match_operand:V4T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V4T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V8T:mode><RATIO8I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO8I 2 "register_operand"       "   vr")





















+    (match_operand:V8T 3 "register_operand"       "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V8T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V16T:mode><RATIO4I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO4I 2 "register_operand"      "   vr")





















+    (match_operand:V16T 3 "register_operand"      "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0"





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V16T:MODE>")])





















+





















+(define_insn "@pred_th_indexed_<th_order>store<V32T:mode><RATIO2I:mode>"





















+  [(set (mem:BLK (scratch))





















+ (unspec:BLK





















+   [(unspec:<VM>





















+     [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")





















+      (match_operand 4 "vector_length_operand"    "   rK")





















+      (match_operand 5 "const_int_operand"        "    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+    (match_operand 1 "pmode_reg_or_0_operand"     "   rJ")





















+    (match_operand:RATIO2I 2 "register_operand"      "   vr")





















+    (match_operand:V32T 3 "register_operand"      "   vr")] ORDER))]





















+  "TARGET_XTHEADVECTOR"





















+  "vs<th_order>xseg<nf>e.v\t%3,(%z1),%2%p0";





















+  [(set_attr "type" "vssegtux")





















+   (set_attr "mode" "<V32T:MODE>")])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop_neg:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfsgnjn.vv\t%0,%3,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop_abs:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vfsgnjx.vv\t%0,%3,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSI 0 "register_operand"          "=vd,vd, vr, vr")





















+ (if_then_else:V_VLSI





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        " i, i,  i,  i")





















+      (match_operand 6 "const_int_operand"        " i, i,  i,  i")





















+      (match_operand 7 "const_int_operand"        " i, i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (not_unop:V_VLSI





















+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))





















+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnot.v\t%0,%3%p1"





















+  [(set_attr "type" "vialu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")





















+ (if_then_else:V_VLSI





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    "rK,rK, rK, rK")





















+      (match_operand 5 "const_int_operand" " i, i,  i,  i")





















+      (match_operand 6 "const_int_operand" " i, i,  i,  i")





















+      (match_operand 7 "const_int_operand" " i, i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (neg_unop:V_VLSI





















+     (match_operand:V_VLSI 3 "register_operand"       "vr,vr, vr, vr"))





















+   (match_operand:V_VLSI 2 "vector_merge_operand"     "vu, 0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vrsub.vx\t%0,%3,x0%p1"





















+  [(set_attr "type" "vialu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "@pred_th_<optab><mode>"





















+  [(set (match_operand:V_VLSF 0 "register_operand"           "=vd, vd, vr, vr")





















+ (if_then_else:V_VLSF





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand" " vm, vm,Wc1,Wc1")





















+      (match_operand 4 "vector_length_operand"    " rK, rK, rK, rK")





















+      (match_operand 5 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")





















+      (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (any_float_unop:V_VLSF





















+     (match_operand:V_VLSF 3 "register_operand"       " vr, vr, vr, vr"))





















+   (match_operand:V_VLSF 2 "vector_merge_operand"     " vu,  0, vu,  0")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<insn>.v\t%0,%3%p1"





















+  [(set_attr "type" "<float_insn_type>")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "ta") (symbol_ref "riscv_vector::get_ta(operands[5])"))





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_narrow_clip<v_su><mode>"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd,&vd, &vr, &vr,&vd, &vr,  &vr,  &vr, &vd, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm,vm,Wc1,Wc1,vm,Wc1,vmWc1,vmWc1, vm,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK,rK, rK, rK,rK, rK,   rK,   rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (match_operand 9 "const_int_operand"                      "  i, i,  i,  i, i,  i,    i,    i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:<V_DOUBLE_TRUNC>





















+     [(match_operand:VWEXTI 3 "register_operand"                " vr,vr, vr, vr, vd,  vr,   vr,   vr,  vd,  vr,   vr,   vr")





















+      (match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand"  "  vd, vd,  vr,  vr,vr, vr,   vr,   vr, vk, vk,   vk,   vk")] VNCLIP)





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     "  vd,vu,  vr, vu,vu, vu,   vu,    vr, vu, vu,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnclip<v_su>.v%o4\t%0,%3,%v4%p1"





















+  [(set_attr "type" "vnclip")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+(define_insn "@pred_th_narrow_clip<v_su><mode>_scalar"





















+  [(set (match_operand:<V_DOUBLE_TRUNC> 0 "register_operand"           "=&vd, &vd, &vr, &vr,  &vr,  &vr")





















+ (if_then_else:<V_DOUBLE_TRUNC>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"               " vm, vm,Wc1,Wc1,vmWc1,vmWc1")





















+      (match_operand 5 "vector_length_operand"                  " rK, rK, rK, rK,   rK,   rK")





















+      (match_operand 6 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 7 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 8 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (match_operand 9 "const_int_operand"                      "  i,  i,  i,  i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI VXRM_REGNUM)] UNSPEC_VPREDICATE)





















+   (unspec:<V_DOUBLE_TRUNC>





















+     [(match_operand:VWEXTI 3 "register_operand"                "  vd,  vd,  vr,  vr,   vr,   vr")





















+      (match_operand 4 "pmode_reg_or_uimm5_operand"             " rK, rK, rK, rK,   rK,   rK")] VNCLIP)





















+   (match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand"     " vu,  vd, vu,  vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR"





















+  "vnclip<v_su>.v%o4\t%0,%3,%4%p1"





















+  [(set_attr "type" "vnclip")





















+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])





















+





















+;; Float Reduction Sum (vfred[ou]sum.vs)





















+(define_insn "@pred_th_<th_reduc_op><mode>"





















+  [(set (match_operand:<V_LMUL1>           0 "register_operand"      "=vr,vr")





















+ (unspec:<V_LMUL1>





















+   [(unspec:<VM>





















+     [(match_operand:<VM>          1 "vector_mask_operand"   "vmWc1,vmWc1")





















+      (match_operand               5 "vector_length_operand" "   rK,   rK")





















+      (match_operand               6 "const_int_operand"     "    i,    i")





















+      (match_operand               7 "const_int_operand"     "    i,    i")





















+      (match_operand               8 "const_int_operand"     "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+           (unspec:<V_LMUL1> [





















+             (match_operand:V_VLSF        3 "register_operand"      "   vr,   vr")





















+             (match_operand:<V_LMUL1>     4 "register_operand"      "   vr,   vr")





















+           ] ANY_FREDUC_SUM)





















+    (match_operand:<V_LMUL1>       2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"





















+  [(set_attr "type" "vfred<order>")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+;; Float Widen Reduction Sum (vfwred[ou]sum.vs)





















+(define_insn "@pred_th_<th_reduc_op><mode>"





















+  [(set (match_operand:<V_EXT_LMUL1>         0 "register_operand"      "=&vr, &vr")





















+ (unspec:<V_EXT_LMUL1>





















+   [(unspec:<VM>





















+     [(match_operand:<VM>           1 "vector_mask_operand"   "vmWc1,vmWc1")





















+      (match_operand                5 "vector_length_operand" "   rK,   rK")





















+      (match_operand                6 "const_int_operand"     "    i,    i")





















+      (match_operand                7 "const_int_operand"     "    i,    i")





















+      (match_operand                8 "const_int_operand"     "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)





















+      (reg:SI FRM_REGNUM)] UNSPEC_VPREDICATE)





















+           (unspec:<V_EXT_LMUL1> [





















+      (match_operand:VF_HS          3 "register_operand"      "   vr,   vr")





















+      (match_operand:<V_EXT_LMUL1>  4 "register_operand"      "  vr0,  vr0")





















+           ] ANY_FWREDUC_SUM)





















+    (match_operand:<V_EXT_LMUL1>    2 "vector_merge_operand"  "   vu,    0")] UNSPEC_REDUC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vf<th_reduc_op>.vs\t%0,%3,%4%p1"





















+  [(set_attr "type" "vfwred<order>")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "frm_mode")





















+ (symbol_ref "riscv_vector::get_frm_mode (operands[8])"))])





















+





















+(define_insn "@pred_th_madc<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")





















+ (unspec:<VM>





















+    [(plus:VI





















+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")





















+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))





















+     (match_operand:<VM> 3 "register_operand"    "  vm,  vm,  vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand" "  rK,  rK,  rK")





















+        (match_operand 5 "const_int_operand"     "   i,   i,   i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.v%o2m\t%0,%1,%v2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_msbc<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI





















+      (match_operand:VI 1 "register_operand"     "  vr")





















+      (match_operand:VI 2 "register_operand"     " vr"))





















+     (match_operand:<VM> 3 "register_operand"    " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand" " rK")





















+        (match_operand 5 "const_int_operand"     "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vvm\t%0,%1,%2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "register_operand" "  r"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_msbc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_expand "@pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_int_operand"))





















+      (match_operand:VI_D 1 "register_operand"))





















+     (match_operand:<VM> 3 "register_operand")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand")





















+        (match_operand 5 "const_int_operand")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+{





















+  if (riscv_vector::sew64_scalar_helper (





















+ operands,





















+ /* scalar op */&operands[2],





















+ /* vl */operands[4],





















+ <MODE>mode,





















+ riscv_vector::simm5_p (operands[2]),





















+ [] (rtx *operands, rtx broadcast_scalar) {





















+   emit_insn (gen_pred_th_madc<mode> (operands[0], operands[1],





















+        broadcast_scalar, operands[3], operands[4], operands[5]));





















+        },





















+ (riscv_vector::avl_type) INTVAL (operands[5])))





















+    DONE;





















+})
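+
+;; Illustrative note: sew64_scalar_helper is needed because on rv32 a
+;; 64-bit scalar cannot live in a single x-register; when the scalar
+;; form is unusable it is expected to broadcast the value into a vector
+;; and emit the .vvm variant, otherwise the *_scalar insns below match.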





















+





















+(define_insn "*pred_th_madc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_D 1 "register_operand"    "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "*pred_th_madc<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (sign_extend:<VEL>





















+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))





















+      (match_operand:VI_D 1 "register_operand"         "  vr"))





















+     (match_operand:<VM> 3 "register_operand"          " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"       " rK")





















+        (match_operand 5 "const_int_operand"           "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_expand "@pred_th_msbc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_int_operand"))





















+      (match_operand:VI_D 1 "register_operand"))





















+     (match_operand:<VM> 3 "register_operand")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand")





















+        (match_operand 5 "const_int_operand")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+{





















+  if (riscv_vector::sew64_scalar_helper (





















+ operands,





















+ /* scalar op */&operands[2],





















+ /* vl */operands[4],





















+ <MODE>mode,





















+ false,





















+ [] (rtx *operands, rtx broadcast_scalar) {





















+   emit_insn (gen_pred_th_msbc<mode> (operands[0], operands[1],





















+        broadcast_scalar, operands[3], operands[4], operands[5]));





















+        },





















+ (riscv_vector::avl_type) INTVAL (operands[5])))





















+    DONE;





















+})





















+





















+(define_insn "*pred_th_msbc<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_D 1 "register_operand"    "  vr"))





















+     (match_operand:<VM> 3 "register_operand"     " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"  " rK")





















+        (match_operand 5 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "*pred_th_msbc<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"              "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (sign_extend:<VEL>





















+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))





















+      (match_operand:VI_D 1 "register_operand"         "  vr"))





















+     (match_operand:<VM> 3 "register_operand"          " vm")





















+     (unspec:<VM>





















+       [(match_operand 4 "vector_length_operand"       " rK")





















+        (match_operand 5 "const_int_operand"           "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vxm\t%0,%1,%z2,%3"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "4")





















+   (set (attr "avl_type_idx") (const_int 5))])





















+





















+(define_insn "@pred_th_madc<mode>_overflow"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr, &vr, &vr")





















+ (unspec:<VM>





















+    [(plus:VI





















+      (match_operand:VI 1 "register_operand"     "  %vr,  vr,  vr")





















+      (match_operand:VI 2 "vector_arith_operand" "vrvi,  vr,  vi"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand" "  rK,  rK,  rK")





















+        (match_operand 4 "const_int_operand"     "   i,   i,   i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.v%o2\t%0,%1,%v2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "@pred_th_msbc<mode>_overflow"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI





















+      (match_operand:VI 1 "register_operand"     "   vr")





















+      (match_operand:VI 2 "register_operand"     "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand" "  rK")





















+        (match_operand 4 "const_int_operand"     "   i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vv\t%0,%1,%2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "@pred_th_madc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"  " rK")





















+        (match_operand 4 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "@pred_th_msbc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_QHS





















+      (vec_duplicate:VI_QHS





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_QHS 1 "register_operand"  "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"  " rK")





















+        (match_operand 4 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_expand "@pred_th_madc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_int_operand"))





















+      (match_operand:VI_D 1 "register_operand"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand")





















+        (match_operand 4 "const_int_operand")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+{





















+  if (riscv_vector::sew64_scalar_helper (





















+ operands,





















+ /* scalar op */&operands[2],





















+ /* vl */operands[3],





















+ <MODE>mode,





















+ riscv_vector::simm5_p (operands[2]),





















+ [] (rtx *operands, rtx broadcast_scalar) {





















+   emit_insn (gen_pred_th_madc<mode>_overflow (operands[0], operands[1],





















+        broadcast_scalar, operands[3], operands[4]));





















+        },





















+ (riscv_vector::avl_type) INTVAL (operands[4])))





















+    DONE;





















+})





















+





















+(define_insn "*pred_th_madc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_D 1 "register_operand"    "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"  " rK")





















+        (match_operand 4 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "*pred_th_madc<mode>_overflow_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")





















+ (unspec:<VM>





















+    [(plus:VI_D





















+      (vec_duplicate:VI_D





















+        (sign_extend:<VEL>





















+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))





















+      (match_operand:VI_D 1 "register_operand"         "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"       " rK")





















+        (match_operand 4 "const_int_operand"           "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmadc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_expand "@pred_th_msbc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_int_operand"))





















+      (match_operand:VI_D 1 "register_operand"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand")





















+        (match_operand 4 "const_int_operand")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+{





















+  if (riscv_vector::sew64_scalar_helper (





















+ operands,





















+ /* scalar op */&operands[2],





















+ /* vl */operands[3],





















+ <MODE>mode,





















+ false,





















+ [] (rtx *operands, rtx broadcast_scalar) {





















+   emit_insn (gen_pred_th_msbc<mode>_overflow (operands[0], operands[1],





















+        broadcast_scalar, operands[3], operands[4]));





















+        },





















+ (riscv_vector::avl_type) INTVAL (operands[4])))





















+    DONE;





















+})





















+





















+(define_insn "*pred_th_msbc<mode>_overflow_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (match_operand:<VEL> 2 "reg_or_0_operand" " rJ"))





















+      (match_operand:VI_D 1 "register_operand"    "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"  " rK")





















+        (match_operand 4 "const_int_operand"      "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "*pred_th_msbc<mode>_overflow_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=&vr")





















+ (unspec:<VM>





















+    [(minus:VI_D





















+      (vec_duplicate:VI_D





















+        (sign_extend:<VEL>





















+          (match_operand:<VSUBEL> 2 "reg_or_0_operand" " rJ")))





















+      (match_operand:VI_D 1 "register_operand"         "  vr"))





















+     (unspec:<VM>





















+       [(match_operand 3 "vector_length_operand"      " rK")





















+        (match_operand 4 "const_int_operand"          "  i")





















+        (reg:SI VL_REGNUM)





















+        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmsbc.vx\t%0,%1,%z2"





















+  [(set_attr "type" "vicalu")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "vl_op_idx" "3")





















+   (set (attr "avl_type_idx") (const_int 4))])





















+





















+(define_insn "*th_vsetvl<mode>"





















+  [(set (match_operand:P 0 "register_operand" "=r")





















+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")





















+    (match_operand 2 "const_int_operand" "i")





















+    (match_operand 3 "const_int_operand" "i")





















+    (match_operand 4 "const_int_operand" "i")





















+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))





















+   (set (reg:SI VL_REGNUM)





















+ (unspec:SI [(match_dup 1)





















+     (match_dup 2)





















+     (match_dup 3)] UNSPEC_VSETVL))





















+   (set (reg:SI VTYPE_REGNUM)





















+ (unspec:SI [(match_dup 2)





















+     (match_dup 3)





















+     (match_dup 4)





















+     (match_dup 5)] UNSPEC_VSETVL))]





















+  "TARGET_XTHEADVECTOR"





















+  "vsetvli\t%0,%1,e%2,%m3"





















+  [(set_attr "type" "vsetvl")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))





















+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))





















+   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))





















+   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])





















+





















+;; vsetvl zero,zero,vtype instruction.





















+;; This pattern has no side effects and does not set the X0 register.





















+(define_insn "*th_vsetvl_vtype_change_only"





















+  [(set (reg:SI VTYPE_REGNUM)





















+ (unspec:SI





















+   [(match_operand 0 "const_int_operand" "i")





















+    (match_operand 1 "const_int_operand" "i")





















+    (match_operand 2 "const_int_operand" "i")





















+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]





















+  "TARGET_XTHEADVECTOR"





















+  "vsetvli\tzero,zero,e%0,%m1"





















+  [(set_attr "type" "vsetvl")





















+   (set_attr "mode" "SI")





















+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))





















+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))





















+   (set (attr "ta") (symbol_ref "INTVAL (operands[2])"))





















+   (set (attr "ma") (symbol_ref "INTVAL (operands[3])"))])





















+





















+;; vsetvl zero,rs1,vtype instruction.





















+;; We need this pattern because we should avoid setting the X0 register





















+;; in the vsetvl instruction pattern.





















+(define_insn "*th_vsetvl_discard_result<mode>"





















+  [(set (reg:SI VL_REGNUM)





















+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")





















+     (match_operand 1 "const_int_operand" "i")





















+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))





















+   (set (reg:SI VTYPE_REGNUM)





















+ (unspec:SI [(match_dup 1)





















+     (match_dup 2)





















+     (match_operand 3 "const_int_operand" "i")





















+     (match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]





















+  "TARGET_XTHEADVECTOR"





















+  "vsetvli\tzero,%0,e%1,%m2"





















+  [(set_attr "type" "vsetvl")





















+   (set_attr "mode" "<MODE>")





















+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))





















+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))





















+   (set (attr "ta") (symbol_ref "INTVAL (operands[3])"))





















+   (set (attr "ma") (symbol_ref "INTVAL (operands[4])"))])





















+





















+;; This is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.





















+;; Since we have many optimization passes from "expand" to "reload_completed",





















+;; such a pattern lets us benefit from these optimizations.





















+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"





















+  [(set (match_operand:P 0 "register_operand" "=r")





















+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")





















+    (match_operand 2 "const_int_operand" "i")





















+    (match_operand 3 "const_int_operand" "i")





















+    (match_operand 4 "const_int_operand" "i")





















+    (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))]





















+  "TARGET_XTHEADVECTOR"





















+  "#"





















+  "&& epilogue_completed"





















+  [(parallel





















+    [(set (match_dup 0)





















+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)





















+      (match_dup 4) (match_dup 5)] UNSPEC_VSETVL))





















+     (set (reg:SI VL_REGNUM)





















+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))





















+     (set (reg:SI VTYPE_REGNUM)





















+   (unspec:SI [(match_dup 2) (match_dup 3) (match_dup 4)





















+       (match_dup 5)] UNSPEC_VSETVL))])]





















+  ""





















+  [(set_attr "type" "vsetvl")





















+   (set_attr "mode" "SI")])





















+





















+(define_insn "*pred_th_cmp<mode>_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"        "   0")





















+      (match_operand 5 "vector_length_operand"        "  rK")





















+      (match_operand 6 "const_int_operand"            "   i")





















+      (match_operand 7 "const_int_operand"            "   i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_ltge_operator"





















+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")





















+       (match_operand:V_VLSI 4 "vector_arith_operand"     "vrvi")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_ltge_operator"





















+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")





















+       (match_operand:V_VLSI 5 "vector_arith_operand"      "   vr,   vr,   vi,   vi")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.v%o5\t%0,%4,%v5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "   0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_ltge_operator"





















+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")





















+       (match_operand:V_VLSI 5 "vector_arith_operand"      " vrvi, vrvi,    vr,    vr, vrvi,    vr,    vr, vrvi, vrvi")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.v%o5\t%0,%4,%v5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















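+;; LT/GE compares are split out so that operand 5 can use the negated
+;; arithmetic immediate constraint (vj) instead of the plain vi range.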
+(define_insn "*pred_th_ltge<mode>_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"        "   0")





















+      (match_operand 5 "vector_length_operand"        "  rK")





















+      (match_operand 6 "const_int_operand"            "   i")





















+      (match_operand 7 "const_int_operand"            "   i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "ltge_operator"





















+      [(match_operand:V_VLSI 3 "register_operand"         "  vr")





















+       (match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.v%o4\t%0,%3,%v4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_ltge<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr,   &vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "ltge_operator"





















+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,   vr,   vr,   vr")





















+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  "   vr,   vr,   vj,   vj")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.v%o5\t%0,%4,%v5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_ltge<mode>_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "ltge_operator"





















+      [(match_operand:V_VLSI 4 "register_operand"          "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")





















+       (match_operand:V_VLSI 5 "vector_neg_arith_operand"  " vrvj, vrvj,    vr,    vr, vrvj,    vr,    vr, vrvj, vrvj")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.v%o5\t%0,%4,%v5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















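+;; Vector-scalar compares (vms*.vx) for QI/HI/SI elements: the scalar
+;; operand is broadcast with vec_duplicate before the comparison.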
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"          "  0")





















+      (match_operand 5 "vector_length_operand"          " rK")





















+      (match_operand 6 "const_int_operand"              "  i")





















+      (match_operand 7 "const_int_operand"              "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_QHS 3 "register_operand"       " vr")





















+       (vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 4 "register_operand"      "  r"))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")





















+       (vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_QHS 4 "register_operand"   "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















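+;; For eq/ne the broadcast scalar may appear as the first comparison
+;; operand, since equality compares are commutative.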
+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))





















+       (match_operand:V_VLSI_QHS 3 "register_operand"        " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))





















+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_eqne<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_QHS





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))





















+       (match_operand:V_VLSI_QHS 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















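+;; DImode vector-scalar compares; the sign-extended narrower-scalar case
+;; is handled by the _extended_scalar patterns below.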
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 3 "register_operand"          " vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"           "  0")





















+      (match_operand 5 "vector_length_operand"           " rK")





















+      (match_operand 6 "const_int_operand"               "  i")





















+      (match_operand 7 "const_int_operand"               "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 4 "register_operand"       "  r"))





















+       (match_operand:V_VLSI_D 3 "register_operand"          " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"             "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"   "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"      "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"          "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"     "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"  "    r,    r,    r,    r,    r"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"     "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r"))





















+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_eqne<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (match_operand:<VEL> 5 "register_operand"     "    r,    r,    r,    r,    r"))





















+       (match_operand:V_VLSI_D 4 "register_operand"        "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















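+;; Compares against a scalar that is sign-extended from a narrower
+;; X register (<VSUBEL>) before being broadcast.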
+(define_insn "*pred_th_cmp<mode>_extended_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"          "  0")





















+      (match_operand 5 "vector_length_operand"          " rK")





















+      (match_operand 6 "const_int_operand"              "  i")





















+      (match_operand 7 "const_int_operand"              "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 3 "register_operand"         " vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 4 "register_operand" "  r")))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_extended_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "comparison_except_eqge_operator"





















+      [(match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_extended_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"            "  0")





















+      (match_operand 5 "vector_length_operand"            " rK")





















+      (match_operand 6 "const_int_operand"                "  i")





















+      (match_operand 7 "const_int_operand"                "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 4 "register_operand"   "  r")))





















+       (match_operand:V_VLSI_D 3 "register_operand"           " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vms%B2.vx\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_extended_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                 "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r")))





















+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_extended_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"       "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"          "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"              "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSI_D





















+         (sign_extend:<VEL>





















+           (match_operand:<VSUBEL> 5 "register_operand" "    r,    r,    r,    r,    r")))





















+       (match_operand:V_VLSI_D 4 "register_operand"         "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"         "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vms%B3.vx\t%0,%4,%5%p1"





















+  [(set_attr "type" "vicmp")





















+   (set_attr "mode" "<MODE>")])





















+





















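+;; Floating-point vector-vector compares (vmf*.vv).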
+(define_insn "*pred_th_cmp<mode>"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")





















+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vmf%B3.vv\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_cmp<mode>_narrow_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"               "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"          "  0")





















+      (match_operand 5 "vector_length_operand"          " rK")





















+      (match_operand 6 "const_int_operand"              "  i")





















+      (match_operand 7 "const_int_operand"              "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "signed_order_operator"





















+      [(match_operand:V_VLSF 3 "register_operand"           " vr")





















+       (match_operand:V_VLSF 4 "register_operand"           " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmf%B2.vv\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,   &vr,   &vr,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,   vr,    vr,    vr,   vr,    vr,   vr,   vr")





















+       (match_operand:V_VLSF 5 "register_operand"      "   vr,   vr,    vr,    vr,   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,   vu,   vu,    vr,    vr,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vmf%B3.vv\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















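+;; Floating-point vector-scalar compares (vmf*.vf); the scalar comes
+;; from an F register.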
+(define_insn "*pred_th_cmp<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"         "  0")





















+      (match_operand 5 "vector_length_operand"         " rK")





















+      (match_operand 6 "const_int_operand"             "  i")





















+      (match_operand 7 "const_int_operand"             "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "signed_order_operator"





















+      [(match_operand:V_VLSF 3 "register_operand"      " vr")





















+       (vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 4 "register_operand"     "  f"))])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmf%B2.vf\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_cmp<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")





















+       (vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vmf%B3.vf\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_cmp<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,  &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "signed_order_operator"





















+      [(match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")





















+       (vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vmf%B3.vf\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+(define_insn "*pred_th_eqne<mode>_scalar_merge_tie_mask"





















+  [(set (match_operand:<VM> 0 "register_operand"              "=vm")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "register_operand"         "  0")





















+      (match_operand 5 "vector_length_operand"         " rK")





















+      (match_operand 6 "const_int_operand"             "  i")





















+      (match_operand 7 "const_int_operand"             "  i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 2 "equality_operator"





















+      [(vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 4 "register_operand"     "  f"))





















+       (match_operand:V_VLSF 3 "register_operand"      " vr")])





















+   (match_dup 1)))]





















+  "TARGET_XTHEADVECTOR"





















+  "vmf%B2.vf\t%0,%3,%4,v0.t"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")





















+   (set_attr "merge_op_idx" "1")





















+   (set_attr "vl_op_idx" "5")





















+   (set (attr "ma") (symbol_ref "riscv_vector::get_ma(operands[6])"))





















+   (set (attr "avl_type_idx") (const_int 7))])





















+





















+;; We don't use early-clobber for LMUL <= 1 to get better codegen.





















+(define_insn "*pred_th_eqne<mode>_scalar"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=&vr,   &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 5 "register_operand"     "    f,    f"))





















+       (match_operand:V_VLSF 4 "register_operand"      "   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"





















+  "vmf%B3.vf\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















+





















+;; We use early-clobber for source LMUL > dest LMUL.





















+(define_insn "*pred_th_eqne<mode>_scalar_narrow"





















+  [(set (match_operand:<VM> 0 "register_operand"                "=vm,   &vr,   &vr,  &vr,  &vr")





















+ (if_then_else:<VM>





















+   (unspec:<VM>





















+     [(match_operand:<VM> 1 "vector_mask_operand"      "    0,vmWc1,vmWc1,vmWc1,vmWc1")





















+      (match_operand 6 "vector_length_operand"         "   rK,   rK,   rK,   rK,   rK")





















+      (match_operand 7 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (match_operand 8 "const_int_operand"             "    i,    i,    i,    i,    i")





















+      (reg:SI VL_REGNUM)





















+      (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)





















+   (match_operator:<VM> 3 "equality_operator"





















+      [(vec_duplicate:V_VLSF





















+         (match_operand:<VEL> 5 "register_operand"     "    f,    f,    f,    f,    f"))





















+       (match_operand:V_VLSF 4 "register_operand"      "   vr,    vr,    vr,   vr,   vr")])





















+   (match_operand:<VM> 2 "vector_merge_operand"        "   vu,   vu,    vr,   vu,    vr")))]





















+  "TARGET_XTHEADVECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"





















+  "vmf%B3.vf\t%0,%4,%5%p1"





















+  [(set_attr "type" "vfcmp")





















+   (set_attr "mode" "<MODE>")])





















\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@ (define_c_enum "unspecv" [
 ])
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@ (define_mode_iterator VI [
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@ (define_mode_iterator VF [
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@ (define_mode_iterator VLSF_ZVFHMIN [
 ])
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@ (define_mode_iterator VEEWEXT2 [
 ])
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@ (define_mode_iterator VEEWEXT8 [
 ])
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 (define_mode_iterator VEEWTRUNC4 [





















-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM2HI "TARGET_64BIT")





















  (RVVM1HI "TARGET_64BIT")





















-  (RVVMF2HI "TARGET_64BIT")





















-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")





















+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















  (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")





















  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")





















-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")





















-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















])





















(define_mode_iterator VEEWTRUNC8 [





















  (RVVM1QI "TARGET_64BIT")





















-  (RVVMF2QI "TARGET_64BIT")





















-  (RVVMF4QI "TARGET_64BIT")





















-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")





















+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")





















+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")





















])





















(define_mode_iterator VEI16 [





















-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")





















-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")





















-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")





















  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")





















@@ -452,11 +452,11 @@ (define_mode_iterator VEI16 [





















])





















(define_mode_iterator VFULLI [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")





















@@ -509,17 +509,17 @@ (define_mode_iterator VFULLI [





















])





















(define_mode_iterator VI_QH [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















])





















(define_mode_iterator VI_QHS [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")





















  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")





















@@ -560,11 +560,11 @@ (define_mode_iterator VI_QHS [





















])





















(define_mode_iterator VI_QHS_NO_M8 [





















-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")





















  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")





















@@ -603,11 +603,11 @@ (define_mode_iterator VI_QHS_NO_M8 [





















(define_mode_iterator VF_HS [





















  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")





















-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")





















  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")





















@@ -638,12 +638,12 @@ (define_mode_iterator VF_HS_NO_M8 [





















  (RVVM4HF "TARGET_ZVFH")





















  (RVVM2HF "TARGET_ZVFH")





















  (RVVM1HF "TARGET_ZVFH")





















-  (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")





















  (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")





















@@ -674,11 +674,11 @@ (define_mode_iterator VF_HS_M8 [





















])





















(define_mode_iterator V_VLSI_QHS [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")





















  (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")





















@@ -756,27 +756,27 @@ (define_mode_iterator VFULLI_D [





















;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or





















;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.





















(define_mode_iterator RATIO64 [





















-  (RVVMF8QI "TARGET_MIN_VLEN > 32")





















-  (RVVMF4HI "TARGET_MIN_VLEN > 32")





















-  (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM1DI "TARGET_VECTOR_ELEN_64")





















-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















])





















(define_mode_iterator RATIO32 [





















-  RVVMF4QI





















-  RVVMF2HI





















+  (RVVMF4QI "!TARGET_XTHEADVECTOR")





















+  (RVVMF2HI "!TARGET_XTHEADVECTOR")





















  RVVM1SI





















  (RVVM2DI "TARGET_VECTOR_ELEN_64")





















-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")





















  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")





















])





















(define_mode_iterator RATIO16 [





















-  RVVMF2QI





















+  (RVVMF2QI "!TARGET_XTHEADVECTOR")





















  RVVM1HI





















  RVVM2SI





















  (RVVM4DI "TARGET_VECTOR_ELEN_64")





















@@ -814,21 +814,21 @@ (define_mode_iterator RATIO1 [





















])





















(define_mode_iterator RATIO64I [





















-  (RVVMF8QI "TARGET_MIN_VLEN > 32")





















-  (RVVMF4HI "TARGET_MIN_VLEN > 32")





















-  (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















])





















(define_mode_iterator RATIO32I [





















-  RVVMF4QI





















-  RVVMF2HI





















+  (RVVMF4QI "!TARGET_XTHEADVECTOR")





















+  (RVVMF2HI "!TARGET_XTHEADVECTOR")





















  RVVM1SI





















  (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















])





















(define_mode_iterator RATIO16I [





















-  RVVMF2QI





















+  (RVVMF2QI "!TARGET_XTHEADVECTOR")





















  RVVM1HI





















  RVVM2SI





















  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















@@ -873,21 +873,21 @@ (define_mode_iterator V_WHOLE [





















])





















(define_mode_iterator V_FRACT [





















-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")





















-  (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















])





















(define_mode_iterator VWEXTI [





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")





















  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")





















@@ -933,7 +933,7 @@ (define_mode_iterator VWEXTF_ZVFHMIN [





















  (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -966,7 +966,7 @@ (define_mode_iterator VWEXTF [





















  (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -996,7 +996,7 @@ (define_mode_iterator VWEXTF [





















(define_mode_iterator VWCONVERTI [





















  (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")





















-  (RVVMF2SI "TARGET_ZVFH")





















+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")





















  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")





















@@ -1045,7 +1045,7 @@ (define_mode_iterator VWWCONVERTI [





















])





















(define_mode_iterator VQEXTI [





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")





















  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")





















@@ -1456,11 +1456,11 @@ (define_mode_iterator VB [





















;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].





















(define_mode_iterator VINDEXED [





















-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")





















+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")





















+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")





















+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")





















  (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















  (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















@@ -1468,12 +1468,12 @@ (define_mode_iterator VINDEXED [





















  (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")





















  (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")





















-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")





















  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")





















@@ -3173,11 +3173,11 @@ (define_mode_attr v_f2si_convert [





















(define_mode_iterator V_VLS_F_CONVERT_SI [





















  (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")





















-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















@@ -3290,12 +3290,12 @@ (define_mode_attr V_F2DI_CONVERT_BRIDGE [





















])





















(define_mode_iterator V_VLS_F_CONVERT_DI [





















-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")





















-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")





















+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")





















  (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")





















  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")





















-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")





















  (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")





















  (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")





















diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md





















index 036b2425f32..9941651341d 100644





















--- a/gcc/config/riscv/vector.md





















+++ b/gcc/config/riscv/vector.md





















@@ -83,7 +83,7 @@ (define_attr "has_vl_op" "false,true"





















;; check. However, we need default value of SEW for vsetvl instruction since there





















;; is no field for ratio in the vsetvl instruction encoding.





















(define_attr "sew" ""





















-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\





















+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\





















  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\





















  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\





















  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\





















@@ -95,6 +95,18 @@ (define_attr "sew" ""





















  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\





















  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")





















(const_int 8)





















+ (eq_attr "mode" "RVVMF16BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 16)





















+      (const_int 8))





















+ (eq_attr "mode" "RVVMF32BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 32)





















+      (const_int 8))





















+ (eq_attr "mode" "RVVMF64BI")





















+    (if_then_else (match_test "TARGET_XTHEADVECTOR")





















+      (const_int 64)





















+      (const_int 8))





















(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\





















  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\





















  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\





















@@ -155,9 +167,9 @@ (define_attr "vlmul" ""





















(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")





















(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")





















(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")





















- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")





















- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")





















- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")





















+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")





















+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")





















+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")





















(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")





















(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")





















(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")





















@@ -428,6 +440,10 @@ (define_attr "ratio" ""





















  vislide1up,vislide1down,vfslide1up,vfslide1down,\





















  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")





















  (const_int INVALID_ATTRIBUTE)





















+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\





















+        vlsegdff,vssegtux,vlsegdox,vlsegdux")





















+       (match_test "TARGET_XTHEADVECTOR"))





















+    (const_int INVALID_ATTRIBUTE)





















(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)





















(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)





















(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)





















@@ -888,6 +904,8 @@ (define_attr "frm_mode" ""





















(symbol_ref "riscv_vector::FRM_DYN")]





















(symbol_ref "riscv_vector::FRM_NONE")))





















+(include "thead-vector.md")





















+





















;; -----------------------------------------------------------------





















;; ---- Miscellaneous Operations





















;; -----------------------------------------------------------------





















@@ -1097,7 +1115,7 @@ (define_expand "mov<mode>"





















(define_insn "*mov<mode>_whole"





















  [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")





















(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "@





















    vl%m1re<sew>.v\t%0,%1





















    vs%m1r.v\t%1,%0





















@@ -1125,7 +1143,7 @@ (define_expand "mov<mode>"





















(define_insn "*mov<mode>"





















  [(set (match_operand:VB 0 "register_operand" "=vr")





















(match_operand:VB 1 "register_operand" " vr"))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "vmv1r.v\t%0,%1"





















  [(set_attr "type" "vmov")





















    (set_attr "mode" "<MODE>")])





















@@ -3680,7 +3698,7 @@ (define_insn "@pred_<optab><mode>_vf2"





















  (any_extend:VWEXTI





















    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))





















  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf2\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















@@ -3701,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf4"





















  (any_extend:VQEXTI





















    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))





















  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf4\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















@@ -3722,7 +3740,7 @@ (define_insn "@pred_<optab><mode>_vf8"





















  (any_extend:VOEXTI





















    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))





















  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]





















-  "TARGET_VECTOR"





















+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"





















  "v<sz>ext.vf8\t%0,%3%p1"





















  [(set_attr "type" "vext")





















    (set_attr "mode" "<MODE>")





















diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















index 2e0e12aa045..2eef9e1e1a8 100644





















--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c





















@@ -1,4 +1,4 @@





















-/* { dg-do compile } */





















+/* { dg-do compile { target { ! riscv_xtheadvector } } } */





















/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */





















void foo0 () {__rvv_bool64_t t;}





















diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















index 3d81b179235..ef329e30785 100644





















--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c





















@@ -1,4 +1,4 @@





















/* { dg-do compile } */





















/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */





















-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */





















+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */





















diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp





















index 7f13ff0ca56..70df6b1401c 100644





















--- a/gcc/testsuite/lib/target-supports.exp





















+++ b/gcc/testsuite/lib/target-supports.exp





















@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {





















    }]





















}





















+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.





















+# Cache the result.





















+





















+proc check_effective_target_riscv_xtheadvector { } {





















+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {





















+       #ifndef __riscv_xtheadvector





















+       #error "Not __riscv_xtheadvector"





















+       #endif





















+    }]





















+}





















+





















+





















# Return 1 if we can execute code when using dg-add-options riscv_v





















proc check_effective_target_riscv_v_ok { } {





















--





















2.17.1































































^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc
  2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
@ 2023-12-20 18:14     ` Jeff Law
  2023-12-27  2:46       ` Re: [PATCH " joshua
  2023-12-29  1:44       ` joshua
  0 siblings, 2 replies; 69+ messages in thread
From: Jeff Law @ 2023-12-20 18:14 UTC (permalink / raw)
  To: Jun Sha (Joshua), gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu



On 12/20/23 05:25, Jun Sha (Joshua) wrote:
> This patch moves the definition of the enums lst_type and
> frm_op_type into riscv-vector-builtins-bases.h and removes
> the static visibility of fold_fault_load(), so these
> can be used in other compile units.
> 
> gcc/ChangeLog:
> 
> 	* config/riscv/riscv-vector-builtins-bases.cc (enum lst_type):
> 	(enum frm_op_type): move to riscv-vector-builtins-bases.h
> 	* config/riscv/riscv-vector-builtins-bases.h
> 	(GCC_RISCV_VECTOR_BUILTINS_BASES_H): Add header files.
> 	(enum lst_type): move from
> 	(enum frm_op_type): riscv-vector-builtins-bases.cc
> 	(fold_fault_load): riscv-vector-builtins-bases.cc
I'm largely hoping to leave the heavy review lifting here to Juzhe who 
knows GCC's RV vector bits as well as anyone.

Just one small issue.  Would it be better to prototype fold_fault_load 
elsewhere and avoid the gimple.h inclusion in 
riscv-vector-builtins-bases.h?  Perhaps riscv-protos.h?
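
For illustration, moving it might look roughly like the following sketch
(the signature is copied loosely from the current static definition, so
treat this as an assumption rather than the agreed change):

  /* riscv-protos.h (sketch only): forward declarations avoid dragging
     gimple.h into riscv-vector-builtins-bases.h.  */
  struct gimple;
  namespace riscv_vector {
    class gimple_folder;
    extern gimple *fold_fault_load (const gimple_folder &);
  }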

You might consider prefixing the function name with riscv_.  It's not
strictly necessary, but it appears to be relatively common in the risc-v port.

Thanks,
Jeff


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns.
  2023-12-20 12:27   ` [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns Jun Sha (Joshua)
@ 2023-12-20 18:16     ` Jeff Law
  2023-12-27  2:49       ` Re: [PATCH " joshua
  0 siblings, 1 reply; 69+ messages in thread
From: Jeff Law @ 2023-12-20 18:16 UTC (permalink / raw)
  To: Jun Sha (Joshua), gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu



On 12/20/23 05:27, Jun Sha (Joshua) wrote:
> This patch splits the definition of csr_operand in predicates.md.
> The newly defined vector_csr_operand has the same functionality
> as csr_operand but can only be used in vector patterns, so that
> changes for vector will not affect scalar patterns in files
> like riscv.md.
> 
> gcc/ChangeLog:
> 
> 	* config/riscv/predicates.md (vector_csr_operand):
> 	Define vector_csr_opeand for vector.
> 	* config/riscv/vector.md:
> 	Use newly defined csr_operand for vector.
So do you envision changing something in vector_csr_operand?  If not, 
then this doesn't make much sense.

Jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
@ 2023-12-20 18:22     ` Jeff Law
  2023-12-20 22:48       ` 钟居哲
  2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
  1 sibling, 1 reply; 69+ messages in thread
From: Jeff Law @ 2023-12-20 18:22 UTC (permalink / raw)
  To: Jun Sha (Joshua), gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu



On 12/20/23 05:32, Jun Sha (Joshua) wrote:
> This patch adds th. prefix to all XTheadVector instructions by
> implementing new assembly output functions.
> 
> gcc/ChangeLog:
> 
> 	* config/riscv/riscv-protos.h
> 	(riscv_asm_output_opcode): New function.
> 	* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
> 	* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
> 
> Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
> Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
> Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
> ---
>   gcc/config/riscv/riscv-protos.h               |  1 +
>   gcc/config/riscv/riscv.cc                     | 26 +++++++++++++++++++
>   gcc/config/riscv/riscv.h                      |  4 +++
>   .../riscv/rvv/xtheadvector/prefix.c           | 12 +++++++++
>   4 files changed, 43 insertions(+)
>   create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
> 
> diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
> index eaee53ce94e..f0eee71a18a 100644
> --- a/gcc/config/riscv/riscv-protos.h
> +++ b/gcc/config/riscv/riscv-protos.h
> @@ -101,6 +101,7 @@ struct riscv_address_info {
>   };
>   
>   /* Routines implemented in riscv.cc.  */
> +extern void riscv_asm_output_opcode(FILE *asm_out_file, const char *p);
>   extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
>   extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
>   extern int riscv_float_const_rtx_index_for_fli (rtx);
> diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
> index 8ae65760b6e..d3010bed8d8 100644
> --- a/gcc/config/riscv/riscv.cc
> +++ b/gcc/config/riscv/riscv.cc
> @@ -5595,6 +5595,32 @@ riscv_get_v_regno_alignment (machine_mode mode)
>     return lmul;
>   }
>   
> +void
> +riscv_asm_output_opcode(FILE *asm_out_file, const char *p)
Needs a function comment.  There's several examples in this file you can 
use to see the style we commonly use.  And a minor formatting nit, 
always put a space between a function name and an open paren.


> +{
> +  if (!TARGET_XTHEADVECTOR)
> +    return;
> +
> +  if (current_output_insn == NULL_RTX)
> +    return;
> +
> +  /* We need to handle the 'vset' special case here since it cannot
> +     be controlled by vector mode. */
> +  if (!strncmp (p, "vset", 4))
> +    {
> +      fputs ("th.", asm_out_file);
> +      return;
> +    }
> +
> +  subrtx_iterator::array_type array;
> +  FOR_EACH_SUBRTX (iter, array, PATTERN (current_output_insn), ALL)
> +    if (*iter && riscv_v_ext_mode_p (GET_MODE (*iter)) && p[0] == 'v')
> +      {
> +	fputs ("th.", asm_out_file);
> +	return;
> +      }
> +}
So rather than looking at the mode, would it make more sense to have an 
attribute (or re-use an existing attribute) to identify which opcodes 
are going to need prefixing?  We've got access to the INSN via 
current_output_insn.  So we can lookup attributes trivially.

This is a question, not a demand -- I'm looking for a solution that's 
going to be reliable with minimal effort going forward.
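
As a rough sketch of that direction (illustrative only -- it assumes the
existing "type" attribute already classifies every opcode that needs the
prefix, which is exactly what would have to be verified):

  /* Meant for riscv.cc, where recog.h, insn-attr.h and output.h are
     already included.  Decide on the "th." prefix from the insn's type
     attribute instead of walking the subrtxes for vector modes.  */
  static bool
  th_asm_prefix_needed_p (rtx_insn *insn)
  {
    if (!TARGET_XTHEADVECTOR || !insn || recog_memoized (insn) < 0)
      return false;
    switch (get_attr_type (insn))
      {
      case TYPE_VSETVL:
      case TYPE_VLDE:
      case TYPE_VSTE:
      case TYPE_VLDS:
      case TYPE_VSTS:
        /* ... the remaining vector types would be listed here ...  */
        return true;
      default:
        return false;
      }
  }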

jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-20 18:22     ` Jeff Law
@ 2023-12-20 22:48       ` 钟居哲
  2023-12-21  4:41         ` Jeff Law
  0 siblings, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 22:48 UTC (permalink / raw)
  To: Jeff Law, cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	Christoph Müllner, jinma, Cooper Qu

[-- Attachment #1: Type: text/plain, Size: 4060 bytes --]

>> So rather than looking at the mode, would it make more sense to have an
>> attribute (or re-use an existing attribute) to identify which opcodes
>> are going to need prefixing?  We've got access to the INSN via
>> current_output_insn.  So we can lookup attributes trivially.

Yes, I totally agree with Jeff's idea. We have added many attributes for each RVV instruction.
For example, the VSETVL pass depends heavily on those attributes to do its optimizations.

Btw, I have reviewed the full patch and I am gonna give more comprehensive comments in the cover letter.



juzhe.zhong@rivai.ai
 
From: Jeff Law
Date: 2023-12-21 02:22
To: Jun Sha (Joshua); gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; christoph.muellner; juzhe.zhong; Jin Ma; Xianmiao Qu
Subject: Re: [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
 
 
On 12/20/23 05:32, Jun Sha (Joshua) wrote:
> This patch adds th. prefix to all XTheadVector instructions by
> implementing new assembly output functions.
> 
> gcc/ChangeLog:
> 
> * config/riscv/riscv-protos.h
> (riscv_asm_output_opcode): New function.
> * config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
> * config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
> 
> Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
> Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
> Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
> ---
>   gcc/config/riscv/riscv-protos.h               |  1 +
>   gcc/config/riscv/riscv.cc                     | 26 +++++++++++++++++++
>   gcc/config/riscv/riscv.h                      |  4 +++
>   .../riscv/rvv/xtheadvector/prefix.c           | 12 +++++++++
>   4 files changed, 43 insertions(+)
>   create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
> 
> diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
> index eaee53ce94e..f0eee71a18a 100644
> --- a/gcc/config/riscv/riscv-protos.h
> +++ b/gcc/config/riscv/riscv-protos.h
> @@ -101,6 +101,7 @@ struct riscv_address_info {
>   };
>   
>   /* Routines implemented in riscv.cc.  */
> +extern void riscv_asm_output_opcode(FILE *asm_out_file, const char *p);
>   extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
>   extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
>   extern int riscv_float_const_rtx_index_for_fli (rtx);
> diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
> index 8ae65760b6e..d3010bed8d8 100644
> --- a/gcc/config/riscv/riscv.cc
> +++ b/gcc/config/riscv/riscv.cc
> @@ -5595,6 +5595,32 @@ riscv_get_v_regno_alignment (machine_mode mode)
>     return lmul;
>   }
>   
> +void
> +riscv_asm_output_opcode(FILE *asm_out_file, const char *p)
Needs a function comment.  There's several examples in this file you can 
use to see the style we commonly use.  And a minor formatting nit, 
always put a space between a function name and an open paren.
 
 
> +{
> +  if (!TARGET_XTHEADVECTOR)
> +    return;
> +
> +  if (current_output_insn == NULL_RTX)
> +    return;
> +
> +  /* We need to handle the 'vset' special case here since it cannot
> +     be controlled by vector mode. */
> +  if (!strncmp (p, "vset", 4))
> +    {
> +      fputs ("th.", asm_out_file);
> +      return;
> +    }
> +
> +  subrtx_iterator::array_type array;
> +  FOR_EACH_SUBRTX (iter, array, PATTERN (current_output_insn), ALL)
> +    if (*iter && riscv_v_ext_mode_p (GET_MODE (*iter)) && p[0] == 'v')
> +      {
> + fputs ("th.", asm_out_file);
> + return;
> +      }
> +}
So rather than looking at the mode, would it make more sense to have an 
attribute (or re-use an existing attribute) to identify which opcodes 
are going to need prefixing?  We've got access to the INSN via 
current_output_insn.  So we can lookup attributes trivially.
 
This is a question, not a demand -- I'm looking for a solution that's 
going to be reliable with minimal effort going forward.
 
jeff
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
                     ` (5 preceding siblings ...)
  2023-12-20 12:36   ` [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics Jun Sha (Joshua)
@ 2023-12-20 23:04   ` 钟居哲
    2023-12-22  3:33     ` Re: [PATCH " joshua
  2023-12-20 23:08   ` [PATCH " 钟居哲
  7 siblings, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 23:04 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, cooper.joshua, jinma, Cooper Qu

[-- Attachment #1: Type: text/plain, Size: 6631 bytes --]

Hi, Joshua.

Thanks for working hard on cleaning up the code and for the tons of work on theadvector.

After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics relative to the codebase of the current RVV1.0 GCC:

1). instructions that can leverage all the current RVV1.0 intrinsic code by simply adding the "th." prefix directly.
2). instructions that leverage the current MD patterns but need some tweaks and pattern copies, since a simple "th." prefix is not enough.
3). new instructions that the current RVV1.0 doesn't have, like the vlb instructions.

Overall, 1) and 3) look reasonable to me. But 2) will need me some time to figure out a better way to do it (the approach in the current patch, copying patterns, is not one I like).

So, I hope you can break this big patch into 3 different patch series:

1. Support the theadvector instructions that leverage current RVV1.0 directly by simply adding the "th." prefix.
2. Support the theadvector instructions with totally different names that share the same patterns as the RVV1.0 instructions.
3. Support the new theadvector instructions like vlb, etc.

I think the separate patches for 1 and 3 can be quickly merged after my more detailed review and approval of the following patches you send, like a v4.

For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM target hook to rewrite the whole string of the instruction. For example, for strided load/store, you can identify the instruction from its attribute:
(set_attr "type" "vlds")
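
A minimal sketch of that idea (hypothetical: the attribute test is real,
but the rewriting shown is only indicative of where it would happen, not
a worked-out implementation):

  /* In the ASM_OUTPUT_OPCODE path: strided loads are identified by
     their type attribute, so the th. spelling can be produced here
     instead of duplicating the whole pattern in thead-vector.md.  */
  if (TARGET_XTHEADVECTOR
      && current_output_insn != NULL
      && recog_memoized (current_output_insn) >= 0
      && get_attr_type (current_output_insn) == TYPE_VLDS)
    {
      fputs ("th.", asm_out_file);
      /* ...plus whatever further mnemonic rewriting the th. form of
         the strided load/store requires.  */
    }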






juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:20
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents gcc implementation of the XTheadVector
extension [1].
 
[1] https://github.com/T-head-Semi/thead-extension-spec/
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in order not to
generate instructions that xtheadvector does not support,
causing 36 changes in vector.md.
 
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
 
We have run the GCC test suite and can confirm that there
are no regressions.
 
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
 
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
 
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
 
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
 
---
gcc/common/config/riscv/riscv-common.cc       |   23 +
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-c.cc                   |    8 +-
gcc/config/riscv/riscv-protos.h               |    1 +
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   18 +-
.../riscv/riscv-vector-builtins-bases.h       |   19 +
.../riscv/riscv-vector-builtins-shapes.cc     |  149 +
.../riscv/riscv-vector-builtins-shapes.h      |    3 +
.../riscv/riscv-vector-builtins-types.def     |  120 +
gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   46 +-
gcc/config/riscv/riscv.h                      |    4 +
gcc/config/riscv/riscv.opt                    |    2 +
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  659 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
gcc/config/riscv/thead-vector-builtins.h      |  123 +
gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   44 +-
.../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
.../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
.../riscv/rvv/xtheadvector/prefix.c           |   12 +
.../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
.../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
.../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
.../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
.../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
.../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
gcc/testsuite/lib/target-supports.exp         |   12 +
39 files changed, 5931 insertions(+), 213 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
                     ` (6 preceding siblings ...)
  2023-12-20 23:04   ` [PATCH v3 0/6] RISC-V: Support XTheadVector extension 钟居哲
@ 2023-12-20 23:08   ` 钟居哲
  2023-12-21  3:28     ` Jeff Law
  7 siblings, 1 reply; 69+ messages in thread
From: 钟居哲 @ 2023-12-20 23:08 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, cooper.joshua, jinma, Cooper Qu

[-- Attachment #1: Type: text/plain, Size: 10430 bytes --]

Btw, rv32/rv64gc or rv32/rv64gcv testing is not enough.

We need full coverage testing, since we always commit a patch only after no-regression testing on full coverage,

with the following configurations:

-march=rv[32/64]gc_zve32f_zvfh_zfh
-march=rv[32/64]gc_zve64d_zvfh_zfh
-march=rv[32/64]gcv_zvfh_zfh
-march=rv[32/64]gcv_zvl256b_zvfh_zfh
-march=rv[32/64]gcv_zvl512b_zvfh_zfh
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh

-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m2
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m4
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m8
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=dynamic
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve32f_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gc_zve64d_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl256b_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl512b_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m2 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m4 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=m8 --param=riscv-autovec-preference=fixed-vlmax
-march=rv[32/64]gcv_zvl1024b_zvfh_zfh --param=riscv-autovec-lmul=dynamic --param=riscv-autovec-preference=fixed-vlmax

You can learn more about how to run this testing by emailing pan2.li@intel.com




juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:20
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents the GCC implementation of the XTheadVector
extension [1].
 
[1] https://github.com/T-head-Semi/thead-extension-spec/
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them so that we do not
generate instructions that xtheadvector does not support;
this accounts for 36 changes in vector.md.
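As a rough illustration (a hedged sketch only -- the pattern below is a
hypothetical, simplified example, not one of the actual 36 changes), the
gating amounts to tightening a pattern's condition:

;; Hypothetical vector.md pattern, unavailable when XTheadVector is active.
(define_insn "@pred_example<mode>"
  [(set (match_operand:VI 0 "register_operand" "=vr")
	(match_operand:VI 1 "register_operand" " vr"))]
  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "vmv.v.v\t%0,%1"
  [(set_attr "type" "vimov")
   (set_attr "mode" "<MODE>")])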
 
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
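For reference, a minimal sketch of that mechanism (hypothetical and
simplified -- the function name and the opcode check are illustrative,
not the code in this series):

/* riscv.h: ASM_OUTPUT_OPCODE lets the backend inspect and adjust the
   opcode text just before it is emitted; PTR may be reassigned.  */
#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
  (PTR) = riscv_asm_output_opcode (STREAM, PTR)

/* riscv.cc: prepend "th." to vector mnemonics under XTheadVector.  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && current_output_insn != NULL && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}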
 
We have run the GCC test suite and can confirm that there
are no regressions.
 
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
 
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
 
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
 
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
 
---
gcc/common/config/riscv/riscv-common.cc       |   23 +
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-c.cc                   |    8 +-
gcc/config/riscv/riscv-protos.h               |    1 +
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   18 +-
.../riscv/riscv-vector-builtins-bases.h       |   19 +
.../riscv/riscv-vector-builtins-shapes.cc     |  149 +
.../riscv/riscv-vector-builtins-shapes.h      |    3 +
.../riscv/riscv-vector-builtins-types.def     |  120 +
gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   46 +-
gcc/config/riscv/riscv.h                      |    4 +
gcc/config/riscv/riscv.opt                    |    2 +
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  659 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
gcc/config/riscv/thead-vector-builtins.h      |  123 +
gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   44 +-
.../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
.../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
.../riscv/rvv/xtheadvector/prefix.c           |   12 +
.../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
.../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
.../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
.../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
.../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
.../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
gcc/testsuite/lib/target-supports.exp         |   12 +
39 files changed, 5931 insertions(+), 213 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
 


* Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-20 23:08   ` [PATCH " 钟居哲
@ 2023-12-21  3:28     ` Jeff Law
  2023-12-21  3:30       ` juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: Jeff Law @ 2023-12-21  3:28 UTC (permalink / raw)
  To: 钟居哲, cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	Christoph Müllner, jinma, Cooper Qu



On 12/20/23 16:08, 钟居哲 wrote:
> Btw, rv32/rv64gc or rv32/rv64gcv testing is not enough.
> 
> We need full coverage testing, since we only commit patches after they 
> show no regressions across the full coverage matrix:
No.  It is unreasonable to require this large a test matrix for the 
vast majority of contributions.

T-Head should be able to pick a reasonable set that provides a degree of 
coverage, but they don't need to test all those configurations.

jeff


* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-21  3:28     ` Jeff Law
@ 2023-12-21  3:30       ` juzhe.zhong
  2023-12-21  4:04         ` Jeff Law
  0 siblings, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-21  3:30 UTC (permalink / raw)
  To: jeffreyalaw, cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, christoph.muellner,
	jinma, cooper.qu


OK.  Sounds reasonable.

But on my side, I'm used to committing patches only after full coverage testing.



juzhe.zhong@rivai.ai
 
From: Jeff Law
Date: 2023-12-21 11:28
To: 钟居哲; cooper.joshua; gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
 
 
On 12/20/23 16:08, 钟居哲 wrote:
> Btw, rv32/rv64gc or rv32/rv64gcv testing is not enough.
> 
> We need full coverage testing, since we only commit patches after they 
> show no regressions across the full coverage matrix:
No.  It is unreasonable to require this large a test matrix for the 
vast majority of contributions.

T-Head should be able to pick a reasonable set that provides a degree of 
coverage, but they don't need to test all those configurations.
 
jeff
 


* Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-21  3:30       ` juzhe.zhong
@ 2023-12-21  4:04         ` Jeff Law
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff Law @ 2023-12-21  4:04 UTC (permalink / raw)
  To: juzhe.zhong, cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, christoph.muellner,
	jinma, cooper.qu



On 12/20/23 20:30, juzhe.zhong@rivai.ai wrote:
> OK.  Sounds reasonable.
> 
> But on my side, I'm used to committing patches only after full coverage testing.
Understood.  And it's appreciated -- the work you're doing hits a wide 
variety of configurations, so the wider testing is probably applicable.

Ideally the thead vector bits need reasonable testing to make sure they 
don't totally break the standard RVV support.  So for something like the 
final scheme to add the "th." prefix I'd expect they can get away with 
just rv64gcv.  While there is a chance that'll miss something, the odds 
are pretty low that a bug will be uncovered for each additional 
configuration tested beyond the first.

In contrast, if they needed to make structural changes that are more 
than adding a path for thead's vector unit, then we might reasonably ask 
for a deeper test of that specific patch (perhaps even suggesting the 
configurations most likely affected and thus which need to be tested).

The key being we want to use time wisely, and testing dozens of multilibs 
for each change isn't really reasonable.

It's always a delicate balance to articulate the right level of testing 
because the "right" level can vary based on each engineer's risk 
assessment of a particular change.

Jeff


* Re: [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-20 22:48       ` 钟居哲
@ 2023-12-21  4:41         ` Jeff Law
  2023-12-21  9:43           ` Kito Cheng
  0 siblings, 1 reply; 69+ messages in thread
From: Jeff Law @ 2023-12-21  4:41 UTC (permalink / raw)
  To: 钟居哲, cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	Christoph Müllner, jinma, Cooper Qu



On 12/20/23 15:48, 钟居哲 wrote:
>>> So rather than looking at the mode, would it make more sense to have an
>>> attribute (or re-use an existing attribute) to identify which opcodes
>>> are going to need prefixing?  We've got access to the INSN via
>>> current_output_insn.  So we can lookup attributes trivially.
> 
> Yes, I totally agree with Jeff's idea. We have added many attributes 
> for each RVV instruction.
> For example, the VSETVL pass depends heavily on those attributes to do 
> the optimizations.
Also note that with attributes, we can potentially even deal with cases 
where some alternatives need special handling while other alternatives 
simply aren't available with the thead extension.  Not sure if that's 
going to be needed or not, but it's worth remembering.
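As a hedged sketch of the attribute-driven variant (illustrative only;
the two "type" values are just ones quoted elsewhere in this thread, and
none of this is committed code):

/* Hypothetical: key the "th." prefix off the insn's "type" attribute
   instead of the opcode text, via current_output_insn.  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && current_output_insn != NULL)
    switch (get_attr_type (current_output_insn))
      {
      case TYPE_VLDS:		/* strided vector loads/stores */
      case TYPE_VICALU:		/* vector integer carry/borrow ALU */
	fputs ("th.", asm_out_file);
	break;
      default:
	break;
      }
  return p;
}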

Jeff


* Re: [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-21  4:41         ` Jeff Law
@ 2023-12-21  9:43           ` Kito Cheng
  0 siblings, 0 replies; 69+ messages in thread
From: Kito Cheng @ 2023-12-21  9:43 UTC (permalink / raw)
  To: Jeff Law
  Cc: 钟居哲,
	cooper.joshua, gcc-patches, jim.wilson.gcc, palmer, andrew,
	philipp.tomsich, Christoph Müllner, jinma, Cooper Qu

Why not just check whether the prefix is 'v'? I don't think xtheadvector is able
to work with other vector extensions like vector crypto or any other new
vector features, so we don't need an extra attribute.

On Thu, Dec 21, 2023 at 12:42 PM Jeff Law <jeffreyalaw@gmail.com> wrote:
>
>
>
> On 12/20/23 15:48, 钟居哲 wrote:
> >>> So rather than looking at the mode, would it make more sense to have an
> >>> attribute (or re-use an existing attribute) to identify which opcodes
> >>> are going to need prefixing?  We've got access to the INSN via
> >>> current_output_insn.  So we can lookup attributes trivially.
> >
> > Yes, I totally agree with Jeff's idea. We have added many attributes
> > for each RVV instruction.
> > For example, the VSETVL pass depends heavily on those attributes to do
> > the optimizations.
> Also note that with attributes, we can potentially even deal with cases
> where some alternatives need special handling while other alternatives
> simply aren't available with the thead extension.  Not sure if that's
> going to be needed or not, but it's worth remembering.
>
> Jeff


* Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-20 23:04   ` [PATCH v3 0/6] RISC-V: Support XTheadVector extension 钟居哲
@ 2023-12-22  3:33     ` joshua
  2023-12-22  8:07       ` juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-22  3:33 UTC (permalink / raw)
  To: 钟居哲, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner, jinma, Cooper Qu


Hi Juzhe,
Thank you for your comprehensive comments.
Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 
For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.
First is renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM target hook to rewrite the whole instruction string, although it will still be heavy work.
Second is missing pseudo-instructions like vneg/vfneg. We will add these pseudo-instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Thursday, December 21, 2023 07:04
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi, Joshua.
Thanks for working hard on cleaning up the code and supporting tons of work on theadvector.
After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics from the codebase of the current RVV 1.0 GCC.
1). instructions that can leverage all current codes of the RVV 1.0 intrinsics by simply adding the "th." prefix directly.
2). instructions that leverage current MD patterns but with some tweaks and pattern copies, since they cannot simply take the "th." prefix.
3). new instructions that current RVV 1.0 doesn't have, like the vlb instructions.
Overall, 1) and 3) look reasonable to me. But 2) needs me some time to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).
So, I hope you can break this big patch into 3 different patch series.
1. Support the partial theadvector instructions which leverage directly from current RVV 1.0 by simply adding the "th." prefix.
2. Support the theadvector instructions with totally different names that share the same patterns as RVV 1.0 instructions.
3. Support new theadvector instructions like vlb...etc.
I think the separate patches for 1 and 3 can be merged quickly after my more detailed review and approval of the follow-up patches you send, like a v4.
For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM target hook to rewrite the whole instruction string.
For example, for strided load/store, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
Date: 2023-12-20 20:20
To: gcc-patches <gcc-patches@gcc.gnu.org>
CC: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; christoph.muellner <christoph.muellner@vrull.eu>; juzhe.zhong <juzhe.zhong@rivai.ai>; Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>; Jin Ma <jinma@linux.alibaba.com>; Xianmiao Qu <cooper.qu@linux.alibaba.com>
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents the GCC implementation of the XTheadVector
extension [1].
[1] https://github.com/T-head-Semi/thead-extension-spec/
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them so that we do not
generate instructions that xtheadvector does not support;
this accounts for 36 changes in vector.md.
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
We have run the GCC test suite and can confirm that there
are no regressions.
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
---
 gcc/common/config/riscv/riscv-common.cc | 23 +
 gcc/config.gcc | 4 +-
 gcc/config/riscv/autovec.md | 2 +-
 gcc/config/riscv/predicates.md | 8 +-
 gcc/config/riscv/riscv-c.cc | 8 +-
 gcc/config/riscv/riscv-protos.h | 1 +
 gcc/config/riscv/riscv-string.cc | 3 +
 gcc/config/riscv/riscv-v.cc | 13 +-
 .../riscv/riscv-vector-builtins-bases.cc | 18 +-
 .../riscv/riscv-vector-builtins-bases.h | 19 +
 .../riscv/riscv-vector-builtins-shapes.cc | 149 +
 .../riscv/riscv-vector-builtins-shapes.h | 3 +
 .../riscv/riscv-vector-builtins-types.def | 120 +
 gcc/config/riscv/riscv-vector-builtins.cc | 315 +-
 gcc/config/riscv/riscv-vector-builtins.h | 5 +-
 gcc/config/riscv/riscv-vector-switch.def | 150 +-
 gcc/config/riscv/riscv.cc | 46 +-
 gcc/config/riscv/riscv.h | 4 +
 gcc/config/riscv/riscv.opt | 2 +
 gcc/config/riscv/riscv_th_vector.h | 49 +
 gcc/config/riscv/t-riscv | 16 +
 .../riscv/thead-vector-builtins-functions.def | 659 ++++
 gcc/config/riscv/thead-vector-builtins.cc | 887 ++++++
 gcc/config/riscv/thead-vector-builtins.h | 123 +
 gcc/config/riscv/thead-vector.md | 2827 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md | 186 +-
 gcc/config/riscv/vector.md | 44 +-
 .../riscv/predef-__riscv_th_v_intrinsic.c | 11 +
 .../gcc.target/riscv/rvv/base/abi-1.c | 2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
 .../gcc.target/riscv/rvv/xtheadvector.c | 13 +
 .../riscv/rvv/xtheadvector/prefix.c | 12 +
 .../riscv/rvv/xtheadvector/vlb-vsb.c | 68 +
 .../riscv/rvv/xtheadvector/vlbu-vsb.c | 68 +
 .../riscv/rvv/xtheadvector/vlh-vsh.c | 68 +
 .../riscv/rvv/xtheadvector/vlhu-vsh.c | 68 +
 .../riscv/rvv/xtheadvector/vlw-vsw.c | 68 +
 .../riscv/rvv/xtheadvector/vlwu-vsw.c | 68 +
 gcc/testsuite/lib/target-supports.exp | 12 +
 39 files changed, 5931 insertions(+), 213 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c


* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-22  3:33     ` Re: [PATCH " joshua
@ 2023-12-22  8:07       ` juzhe.zhong
  2023-12-22 10:29         ` Re: Re: [PATCH " joshua
  2023-12-22 17:21         ` Jeff Law
  0 siblings, 2 replies; 69+ messages in thread
From: juzhe.zhong @ 2023-12-22  8:07 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 9324 bytes --]

You mean theadvector doesn't want the current RVV 1.0 register overlap magic, as follows?
The destination EEW is smaller than the source EEW and the overlap is in the lowest-numbered part of the source register group (e.g., when LMUL=1, vnsrl.wi v0, v0, 3 is legal, but a destination of v1 is not).
The destination EEW is greater than the source EEW, the source EMUL is at least 1, and the overlap is in the highest-numbered part of the destination register group (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).

If yes, I suggest disabling the overlap constraint using an attribute. You can learn more details from:

(set_attr "group_overlap"
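To illustrate the idea (a hedged sketch only -- the attribute name comes
from the snippet above, but these values and the wiring into the
"enabled" machinery are hypothetical, not the actual vector.md
definitions):

(define_attr "group_overlap" "none,thv_disabled,rvv_disabled"
  (const_string "none"))

;; Per-alternative availability: an alternative tagged "thv_disabled"
;; drops out when compiling for XTheadVector, and one tagged
;; "rvv_disabled" drops out when compiling for standard RVV.
(define_attr "enabled" "no,yes"
  (cond [(and (eq_attr "group_overlap" "thv_disabled")
	      (match_test "TARGET_XTHEADVECTOR"))
	 (const_string "no")
	 (and (eq_attr "group_overlap" "rvv_disabled")
	      (match_test "!TARGET_XTHEADVECTOR"))
	 (const_string "no")]
	(const_string "yes")))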


juzhe.zhong@rivai.ai
 
From: joshua
Date: 2023-12-22 11:33
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,

Thank you for your comprehensive comments.

Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 

For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.

First is renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM target hook to rewrite the whole instruction string, although it will still be heavy work.
Second is missing pseudo-instructions like vneg/vfneg. We will add these pseudo-instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?

Joshua




------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Thursday, December 21, 2023 07:04
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

Hi, Joshua.

Thanks for working hard on cleaning up the code and supporting tons of work on theadvector.

After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics from the codebase of the current RVV 1.0 GCC.

1). instructions that can leverage all current codes of the RVV 1.0 intrinsics by simply adding the "th." prefix directly.
2). instructions that leverage current MD patterns but with some tweaks and pattern copies, since they cannot simply take the "th." prefix.
3). new instructions that current RVV 1.0 doesn't have, like the vlb instructions.

Overall, 1) and 3) look reasonable to me. But 2) needs me some time to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).

So, I hope you can break this big patch into 3 different patch series.

1. Support the partial theadvector instructions which leverage directly from current RVV 1.0 by simply adding the "th." prefix.
2. Support the theadvector instructions with totally different names that share the same patterns as RVV 1.0 instructions.
3. Support new theadvector instructions like vlb...etc.

I think the separate patches for 1 and 3 can be merged quickly after my more detailed review and approval of the follow-up patches you send, like a v4.

For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM target hook to rewrite the whole instruction string.
For example, for strided load/store, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")






juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:20
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents the GCC implementation of the XTheadVector
extension [1].
 
[1] https://github.com/T-head-Semi/thead-extension-spec/
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them so that we do not
generate instructions that xtheadvector does not support;
this accounts for 36 changes in vector.md.
 
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
 
We have run the GCC test suite and can confirm that there
are no regressions.
 
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
 
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
 
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
 
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
 
---
gcc/common/config/riscv/riscv-common.cc       |   23 +
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-c.cc                   |    8 +-
gcc/config/riscv/riscv-protos.h               |    1 +
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   18 +-
.../riscv/riscv-vector-builtins-bases.h       |   19 +
.../riscv/riscv-vector-builtins-shapes.cc     |  149 +
.../riscv/riscv-vector-builtins-shapes.h      |    3 +
.../riscv/riscv-vector-builtins-types.def     |  120 +
gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   46 +-
gcc/config/riscv/riscv.h                      |    4 +
gcc/config/riscv/riscv.opt                    |    2 +
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  659 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
gcc/config/riscv/thead-vector-builtins.h      |  123 +
gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   44 +-
.../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
.../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
.../riscv/rvv/xtheadvector/prefix.c           |   12 +
.../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
.../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
.../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
.../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
.../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
.../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
gcc/testsuite/lib/target-supports.exp         |   12 +
39 files changed, 5931 insertions(+), 213 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
 



* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-22  8:07       ` juzhe.zhong
@ 2023-12-22 10:29         ` joshua
  2023-12-22 10:31           ` Re: [PATCH " juzhe.zhong
  2023-12-22 17:21         ` Jeff Law
  1 sibling, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-22 10:29 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu


Hi Juzhe,
What xtheadvector needs to handle is just that the destination vector register cannot overlap the source vector register group for instructions like vmadc/vmsbc. That is not what group_overlap means. We need to add "&" to the destination registers in the corresponding xtheadvector patterns, while RVV 1.0 doesn't have this constraint.
(define_insn "@pred_th_msbc<mode>"
  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
	(unspec:<VM>
	   [(minus:VI
	       (match_operand:VI 1 "register_operand"     " vr")
	       (match_operand:VI 2 "register_operand"     " vr"))
	    (match_operand:<VM> 3 "register_operand"      " vm")
	    (unspec:<VM>
	      [(match_operand 4 "vector_length_operand"   " rK")
	       (match_operand 5 "const_int_operand"       "  i")
	       (reg:SI VL_REGNUM)
	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
  "TARGET_XTHEADVECTOR"
  "vmsbc.vvm\t%0,%1,%2,%3"
  [(set_attr "type" "vicalu")
   (set_attr "mode" "<MODE>")
   (set_attr "vl_op_idx" "4")
   (set (attr "avl_type_idx") (const_int 5))])
Joshua
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Date: Friday, December 22, 2023 16:07
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: Jim Wilson <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; "christoph.muellner" <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; "cooper.qu" <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
You mean theadvector doesn't want the current RVV 1.0 register overlap magic, as follows?

 * The destination EEW is smaller than the source EEW and the overlap is in the lowest-numbered part of the source register group (e.g., when LMUL=1, vnsrl.wi v0, v0, 3 is legal, but a destination of v1 is not).

 * The destination EEW is greater than the source EEW, the source EMUL is at least 1, and the overlap is in the highest-numbered part of the destination register group (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).

If yes, I suggest disabling the overlap constraint using an attribute. You can learn more details from:
(set_attr "group_overlap"
juzhe.zhong@rivai.ai
From: joshua <cooper.joshua@linux.alibaba.com>
Date: 2023-12-22 11:33
To: 钟居哲 <juzhe.zhong@rivai.ai>; gcc-patches <gcc-patches@gcc.gnu.org>
Cc: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; Christoph Müllner <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,
Thank you for your comprehensive comments.
Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 
For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.
First is renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM target hook to rewrite the whole instruction string, although it will still be heavy work.
Second is missing pseudo-instructions like vneg/vfneg. We will add these pseudo-instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Thursday, December 21, 2023 07:04
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi, Joshua.
Thanks for working hard on cleaning up the code and supporting tons of work on theadvector.
After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics from the codebase of the current RVV 1.0 GCC.
1). instructions that can leverage all current codes of the RVV 1.0 intrinsics by simply adding the "th." prefix directly.
2). instructions that leverage current MD patterns but with some tweaks and pattern copies, since they cannot simply take the "th." prefix.
3). new instructions that current RVV 1.0 doesn't have, like the vlb instructions.
Overall, 1) and 3) look reasonable to me. But 2) needs me some time to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).
So, I hope you can break this big patch into 3 different patch series.
1. Support the partial theadvector instructions which leverage directly from current RVV 1.0 by simply adding the "th." prefix.
2. Support the theadvector instructions with totally different names that share the same patterns as RVV 1.0 instructions.
3. Support new theadvector instructions like vlb...etc.
I think the separate patches for 1 and 3 can be merged quickly after my more detailed review and approval of the follow-up patches you send, like a v4.
For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM target hook to rewrite the whole instruction string.
For example, for strided load/store, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
Date: 2023-12-20 20:20
To: gcc-patches <gcc-patches@gcc.gnu.org>
CC: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; christoph.muellner <christoph.muellner@vrull.eu>; juzhe.zhong <juzhe.zhong@rivai.ai>; Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>; Jin Ma <jinma@linux.alibaba.com>; Xianmiao Qu <cooper.qu@linux.alibaba.com>
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents the GCC implementation of the XTheadVector
extension [1].
[1] https://github.com/T-head-Semi/thead-extension-spec/
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them so that we do not
generate instructions that xtheadvector does not support;
this accounts for 36 changes in vector.md.
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
We have run the GCC test suite and can confirm that there
are no regressions.
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
---
 gcc/common/config/riscv/riscv-common.cc | 23 +
 gcc/config.gcc | 4 +-
 gcc/config/riscv/autovec.md | 2 +-
 gcc/config/riscv/predicates.md | 8 +-
 gcc/config/riscv/riscv-c.cc | 8 +-
 gcc/config/riscv/riscv-protos.h | 1 +
 gcc/config/riscv/riscv-string.cc | 3 +
 gcc/config/riscv/riscv-v.cc | 13 +-
 .../riscv/riscv-vector-builtins-bases.cc | 18 +-
 .../riscv/riscv-vector-builtins-bases.h | 19 +
 .../riscv/riscv-vector-builtins-shapes.cc | 149 +
 .../riscv/riscv-vector-builtins-shapes.h | 3 +
 .../riscv/riscv-vector-builtins-types.def | 120 +
 gcc/config/riscv/riscv-vector-builtins.cc | 315 +-
 gcc/config/riscv/riscv-vector-builtins.h | 5 +-
 gcc/config/riscv/riscv-vector-switch.def | 150 +-
 gcc/config/riscv/riscv.cc | 46 +-
 gcc/config/riscv/riscv.h | 4 +
 gcc/config/riscv/riscv.opt | 2 +
 gcc/config/riscv/riscv_th_vector.h | 49 +
 gcc/config/riscv/t-riscv | 16 +
 .../riscv/thead-vector-builtins-functions.def | 659 ++++
 gcc/config/riscv/thead-vector-builtins.cc | 887 ++++++
 gcc/config/riscv/thead-vector-builtins.h | 123 +
 gcc/config/riscv/thead-vector.md | 2827 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md | 186 +-
 gcc/config/riscv/vector.md | 44 +-
 .../riscv/predef-__riscv_th_v_intrinsic.c | 11 +
 .../gcc.target/riscv/rvv/base/abi-1.c | 2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
 .../gcc.target/riscv/rvv/xtheadvector.c | 13 +
 .../riscv/rvv/xtheadvector/prefix.c | 12 +
 .../riscv/rvv/xtheadvector/vlb-vsb.c | 68 +
 .../riscv/rvv/xtheadvector/vlbu-vsb.c | 68 +
 .../riscv/rvv/xtheadvector/vlh-vsh.c | 68 +
 .../riscv/rvv/xtheadvector/vlhu-vsh.c | 68 +
 .../riscv/rvv/xtheadvector/vlw-vsw.c | 68 +
 .../riscv/rvv/xtheadvector/vlwu-vsw.c | 68 +
 gcc/testsuite/lib/target-supports.exp | 12 +
 39 files changed, 5931 insertions(+), 213 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c


* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-22 10:29         ` Re: Re: [PATCH " joshua
@ 2023-12-22 10:31           ` juzhe.zhong
  2023-12-23  3:37             ` Re: Re: [PATCH " joshua
  0 siblings, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-22 10:31 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu


Yeah.

(define_insn "@pred_msbc<mode>"
  [(set (match_operand:<VM> 0 "register_operand"        "=vr, vr, &vr")
  (unspec:<VM>
     [(minus:VI
       (match_operand:VI 1 "register_operand"     "  0, vr,  vr")
       (match_operand:VI 2 "register_operand"     " vr,  0,  vr"))
      (match_operand:<VM> 3 "register_operand"    " vm, vm,  vm")
      (unspec:<VM>
        [(match_operand 4 "vector_length_operand" " rK, rK,  rK")
         (match_operand 5 "const_int_operand"     "  i,  i,   i")
         (reg:SI VL_REGNUM)
         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
  "TARGET_VECTOR"
  "vmsbc.vvm\t%0,%1,%2,%3"
  [(set_attr "type" "vicalu")
   (set_attr "mode" "<MODE>")
   (set_attr "vl_op_idx" "4")
   (set (attr "avl_type_idx") (const_int 5))])

You should use an attribute to disable alternatives 0 and 1 instead.
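Concretely (a hedged sketch following the hypothetical group_overlap
attribute sketched earlier in the thread, not committed code): the
pattern keeps all three alternatives and TARGET_VECTOR as its condition,
and the attribute list grows one line so that the two tied-operand
alternatives disappear under XTheadVector, while the earlyclobber
alternative 2 serves both extensions:

   ;; Alternatives 0 and 1 tie operand 1 or 2 to the destination,
   ;; which XTheadVector forbids for vmsbc; alternative 2 stays.
   (set_attr "group_overlap" "thv_disabled,thv_disabled,none")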


juzhe.zhong@rivai.ai
 
From: joshua
Date: 2023-12-22 18:29
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,
What xtheadvector needs to handle is just that the destination vector register cannot overlap the source vector register group for instructions like vmadc/vmsbc. That is not what group_overlap means. We need to add "&" to the destination registers in the corresponding xtheadvector patterns, while RVV 1.0 doesn't have this constraint.

(define_insn "@pred_th_msbc<mode>"
  [(set (match_operand:<VM> 0 "register_operand"         "=&vr")
	(unspec:<VM>
	   [(minus:VI
	       (match_operand:VI 1 "register_operand"     " vr")
	       (match_operand:VI 2 "register_operand"     " vr"))
	    (match_operand:<VM> 3 "register_operand"      " vm")
	    (unspec:<VM>
	      [(match_operand 4 "vector_length_operand"   " rK")
	       (match_operand 5 "const_int_operand"       "  i")
	       (reg:SI VL_REGNUM)
	       (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
  "TARGET_XTHEADVECTOR"
  "vmsbc.vvm\t%0,%1,%2,%3"
  [(set_attr "type" "vicalu")
   (set_attr "mode" "<MODE>")
   (set_attr "vl_op_idx" "4")
   (set (attr "avl_type_idx") (const_int 5))])

Joshua







------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Date: Friday, December 22, 2023 16:07
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: Jim Wilson <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; "christoph.muellner" <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; "cooper.qu" <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

You mean theadvector doesn't want the current RVV 1.0 register overlap magic, as follows?
The destination EEW is smaller than the source EEW and the overlap is in the lowest-numbered part of the source register group (e.g., when LMUL=1, vnsrl.wi v0, v0, 3 is legal, but a destination of v1 is not).
The destination EEW is greater than the source EEW, the source EMUL is at least 1, and the overlap is in the highest-numbered part of the destination register group (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).

If yes, I suggest disabling the overlap constraint using an attribute. You can learn more details from:

(set_attr "group_overlap"


juzhe.zhong@rivai.ai
 
From: joshua
Date: 2023-12-22 11:33
To: 钟居哲; gcc-patches
Cc: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,

Thank you for your comprehensive comments.

Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 

For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.

First is renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM target hook to rewrite the whole instruction string, although it will still be heavy work.
Second is missing pseudo-instructions like vneg/vfneg. We will add these pseudo-instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?

Joshua




------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Date: Thursday, December 21, 2023 07:04
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc" <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; "Christoph Müllner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

Hi, Joshua.

Thanks for working hard on cleaning up the code and supporting tons of work on theadvector.

After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics from the codebase of the current RVV 1.0 GCC.

1). instructions that can leverage all current codes of the RVV 1.0 intrinsics by simply adding the "th." prefix directly.
2). instructions that leverage current MD patterns but with some tweaks and pattern copies, since they cannot simply take the "th." prefix.
3). new instructions that current RVV 1.0 doesn't have, like the vlb instructions.

Overall, 1) and 3) look reasonable to me. But 2) needs me some time to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).

So, I hope you can break this big patch into 3 different patch series.

1. Support the partial theadvector instructions which leverage directly from current RVV 1.0 by simply adding the "th." prefix.
2. Support the theadvector instructions with totally different names that share the same patterns as RVV 1.0 instructions.
3. Support new theadvector instructions like vlb...etc.

I think the separate patches for 1 and 3 can be merged quickly after my more detailed review and approval of the follow-up patches you send, like a v4.

For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM target hook to rewrite the whole instruction string.
For example, for strided load/store, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")






juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:20
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents the GCC implementation of the XTheadVector
extension [1].
 
[1] https://github.com/T-head-Semi/thead-extension-spec/
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them so that we do not
generate instructions that xtheadvector does not support;
this accounts for 36 changes in vector.md.
 
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
 
We have run the GCC test suite and can confirm that there
are no regressions.
 
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
 
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
 
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
 
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
 
---
gcc/common/config/riscv/riscv-common.cc       |   23 +
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-c.cc                   |    8 +-
gcc/config/riscv/riscv-protos.h               |    1 +
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   18 +-
.../riscv/riscv-vector-builtins-bases.h       |   19 +
.../riscv/riscv-vector-builtins-shapes.cc     |  149 +
.../riscv/riscv-vector-builtins-shapes.h      |    3 +
.../riscv/riscv-vector-builtins-types.def     |  120 +
gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   46 +-
gcc/config/riscv/riscv.h                      |    4 +
gcc/config/riscv/riscv.opt                    |    2 +
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  659 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
gcc/config/riscv/thead-vector-builtins.h      |  123 +
gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   44 +-
.../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
.../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
.../riscv/rvv/xtheadvector/prefix.c           |   12 +
.../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
.../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
.../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
.../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
.../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
.../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
gcc/testsuite/lib/target-supports.exp         |   12 +
39 files changed, 5931 insertions(+), 213 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
 



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-22  8:07       ` juzhe.zhong
  2023-12-22 10:29         ` Re: Re: [PATCH " joshua
@ 2023-12-22 17:21         ` Jeff Law
  1 sibling, 0 replies; 69+ messages in thread
From: Jeff Law @ 2023-12-22 17:21 UTC (permalink / raw)
  To: juzhe.zhong, cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, christoph.muellner,
	jinma, cooper.qu



On 12/22/23 01:07, juzhe.zhong@rivai.ai wrote:
> You mean theadvector doesn't want the current RVV1.0 register overlap
> magic as follows?
> 
>   *
> 
>     The destination EEW is smaller than the source EEW and the overlap
>     is in the lowest-numbered part of the source register group (e.g.,
>     when LMUL=1, |vnsrl.wi v0, v0, 3| is legal, but a destination of
>     |v1| is not).
> 
>   *
> 
>     The destination EEW is greater than the source EEW, the source EMUL
>     is at least 1, and the overlap is in the highest-numbered part of
>     the destination register group (e.g., when LMUL=8, |vzext.vf4 v0,
>     v6| is legal, but a source of |v0|, |v2|, or |v4| is not).
> 
> 
> If yes, I suggest disabling the overlap constraint using an attribute. More
> details you can learn from
Yea, if there are alternatives we want to allow for xthead, but not rvv or
vice-versa, I would think the "enabled" attribute would be a reasonable
option.  Essentially it allows alternatives to be available or
unavailable based on the subtarget.
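Something like the following sketch, where the "spec_restriction"
attribute name and its values are hypothetical, just to illustrate the
mechanism:

(define_attr "spec_restriction" "none,thv,rvv"
  (const_string "none"))

;; An alternative tagged "thv" is only enabled for XTheadVector;
;; one tagged "rvv" is only enabled for RVV 1.0.
(define_attr "enabled" "no,yes"
  (cond [(and (eq_attr "spec_restriction" "thv")
	      (match_test "!TARGET_XTHEADVECTOR"))
	 (const_string "no")

	 (and (eq_attr "spec_restriction" "rvv")
	      (match_test "TARGET_XTHEADVECTOR"))
	 (const_string "no")]
	(const_string "yes")))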

It sounds like this may be necessary because of differences in how 
overlap is handled across 0.7 vs 1.0.

Jeff
> 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-22 10:31           ` Re: [PATCH " juzhe.zhong
@ 2023-12-23  3:37             ` joshua
  2023-12-23 22:52               ` Re: [PATCH " 钟居哲
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-23  3:37 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner

[-- Attachment #1: Type: text/plain, Size: 13646 bytes --]

Hi Juzhe,
Sorry, but I'm not quite familiar with the group_overlap framework. Could you take this pattern as an example to show how to disable an alternative for a particular target?
Joshua
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-22 (Friday) 18:32
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Yeah.
(define_insn "@pred_msbc<mode>"
 [(set (match_operand:<VM> 0 "register_operand" "=vr, vr, &vr")
 (unspec:<VM>
 [(minus:VI
 (match_operand:VI 1 "register_operand" " 0, vr, vr")
 (match_operand:VI 2 "register_operand" " vr, 0, vr"))
 (match_operand:<VM> 3 "register_operand" " vm, vm, vm")
 (unspec:<VM>
 [(match_operand 4 "vector_length_operand" " rK, rK, rK")
 (match_operand 5 "const_int_operand" " i, i, i")
 (reg:SI VL_REGNUM)
 (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
"TARGET_VECTOR"
"vmsbc.vvm\t%0,%1,%2,%3"
 [(set_attr "type" "vicalu")
 (set_attr "mode" "<MODE>")
 (set_attr "vl_op_idx" "4")
 (set (attr "avl_type_idx") (const_int 5))])
You should use an attribute to disable the alternative 0 and alternative 1 constraints.
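For example (a sketch that assumes a hypothetical "spec_restriction"
attribute wired into "enabled", along the lines discussed elsewhere in
this thread), the attribute list of the pattern above could mark the two
overlapping alternatives as RVV-1.0-only:

  [(set_attr "type" "vicalu")
   (set_attr "mode" "<MODE>")
   (set_attr "vl_op_idx" "4")
   (set (attr "avl_type_idx") (const_int 5))
   ;; Alternatives 0 and 1 let the destination match a source
   ;; (constraint "0"); disable them for XTheadVector.
   (set_attr "spec_restriction" "rvv,rvv,none")])

That way only the early-clobber "&vr" alternative survives under
XTheadVector, enforcing the no-overlap rule without copying the whole
pattern.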
juzhe.zhong@rivai.ai
From: joshua <cooper.joshua@linux.alibaba.com>
Sent: 2023-12-22 18:29
To: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>; gcc-patches <gcc-patches@gcc.gnu.org>
CC: Jim Wilson <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; christoph.muellner <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; cooper.qu <cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,
What xtheadvector needs to handle is just that the destination vector register cannot overlap the source vector register group for instructions like vmadc/vmsbc. That is not what group_overlap means. We need to add "&" to the registers in the corresponding xtheadvector patterns, while rvv 1.0 doesn't have this constraint.
(define_insn "@pred_th_msbc<mode>"
 [(set (match_operand:<VM> 0 "register_operand" "=&vr")
 (unspec:<VM>
 [(minus:VI
 (match_operand:VI 1 "register_operand" " vr")
 (match_operand:VI 2 "register_operand" " vr"))
 (match_operand:<VM> 3 "register_operand" " vm")
 (unspec:<VM>
 [(match_operand 4 "vector_length_operand" " rK")
 (match_operand 5 "const_int_operand" " i")
 (reg:SI VL_REGNUM)
 (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
 "TARGET_XTHEADVECTOR"
 "vmsbc.vvm\t%0,%1,%2,%3"
 [(set_attr "type" "vicalu")
 (set_attr "mode" "<MODE>")
 (set_attr "vl_op_idx" "4")
 (set (attr "avl_type_idx") (const_int 5))])
Joshua
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-22 (Friday) 16:07
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
You mean theadvector doesn't want the current RVV1.0 register overlap magic as follows?

 * The destination EEW is smaller than the source EEW and the overlap is in the lowest-numbered part of the source register group (e.g., when LMUL=1, vnsrl.wi v0, v0, 3 is legal, but a destination of v1 is not).

 * The destination EEW is greater than the source EEW, the source EMUL is at least 1, and the overlap is in the highest-numbered part of the destination register group (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).

If yes, I suggest disabling the overlap constraint using an attribute. More details you can learn from
(set_attr "group_overlap"
juzhe.zhong@rivai.ai
From: joshua <cooper.joshua@linux.alibaba.com>
Sent: 2023-12-22 11:33
To: 钟居哲 <juzhe.zhong@rivai.ai>; gcc-patches <gcc-patches@gcc.gnu.org>
CC: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; Jeff Law <jeffreyalaw@gmail.com>; Christoph Müllner <christoph.muellner@vrull.eu>; jinma <jinma@linux.alibaba.com>; Cooper Qu <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,
Thank you for your comprehensive comments.
Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 
For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.
First is renamed instructions: renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM targethook to rewrite the whole instruction string, although it will still be heavy work.
Second is instructions with no pseudo-instruction equivalents, like vneg/vfneg. We will add these pseudo instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?
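To make the constraint concrete (an illustration based on the overlap
rules discussed in this thread): with LMUL=1, RVV 1.0 accepts

	vmsbc.vvm	v2,v2,v3,v0

because the mask destination overlaps the lowest-numbered part of the
source register group, while under xtheadvector the same instruction
with a v2 destination is invalid, since the destination may not overlap
any source vector register group.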
Joshua
------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: 2023-12-21 (Thursday) 07:04
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi, Joshua.
Thanks for working hard on cleaning up the code and for the tons of work on theadvector.
After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics relative to the codebase of the current RVV1.0 GCC.
1). instructions that can leverage all the current RVV1.0 intrinsic code by simply adding the "th." prefix directly.
2). instructions that leverage the current MD patterns but need some tweaks and pattern copies, since they are not obtained by simply adding "th.".
3). new instructions that the current RVV1.0 doesn't have, like the vlb instructions.
Overall, 1) and 3) look reasonable to me, but 2) needs some time for me to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).
So, I hope you can break this big patch into 3 different patch series.
1. Support the subset of theadvector instructions that leverage directly from the current RVV1.0 by simply adding the "th." prefix.
2. Support theadvector instructions that have totally different names but share the same patterns as RVV1.0 instructions.
3. Support new theadvector instructions like vlb, etc.
I think the separate patches for 1 and 3 can be merged quickly once I have reviewed and approved the details in the follow-up patches you send, e.g. a v4.
For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM targethook to rewrite the whole instruction string.
For example, for strided loads/stores, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
Date: 2023-12-20 20:20
To: gcc-patches <gcc-patches@gcc.gnu.org>
CC: jim.wilson.gcc <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; philipp.tomsich <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; christoph.muellner <christoph.muellner@vrull.eu>; juzhe.zhong <juzhe.zhong@rivai.ai>; Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>; Jin Ma <jinma@linux.alibaba.com>; Xianmiao Qu <cooper.qu@linux.alibaba.com>
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents gcc implementation of the XTheadVector
extension [1].
[1] https://github.com/T-head-Semi/thead-extension-spec/
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in order not to
generate instructions that xtheadvector does not support,
causing 36 changes in vector.md.
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
We have run the GCC test suite and can confirm that there
are no regressions.
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
---
 gcc/common/config/riscv/riscv-common.cc       |   23 +
 gcc/config.gcc                                |    4 +-
 gcc/config/riscv/autovec.md                   |    2 +-
 gcc/config/riscv/predicates.md                |    8 +-
 gcc/config/riscv/riscv-c.cc                   |    8 +-
 gcc/config/riscv/riscv-protos.h               |    1 +
 gcc/config/riscv/riscv-string.cc              |    3 +
 gcc/config/riscv/riscv-v.cc                   |   13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   18 +-
 .../riscv/riscv-vector-builtins-bases.h       |   19 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  149 +
 .../riscv/riscv-vector-builtins-shapes.h      |    3 +
 .../riscv/riscv-vector-builtins-types.def     |  120 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
 gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
 gcc/config/riscv/riscv-vector-switch.def      |  150 +-
 gcc/config/riscv/riscv.cc                     |   46 +-
 gcc/config/riscv/riscv.h                      |    4 +
 gcc/config/riscv/riscv.opt                    |    2 +
 gcc/config/riscv/riscv_th_vector.h            |   49 +
 gcc/config/riscv/t-riscv                      |   16 +
 .../riscv/thead-vector-builtins-functions.def |  659 ++++
 gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
 gcc/config/riscv/thead-vector-builtins.h      |  123 +
 gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
 gcc/config/riscv/vector-iterators.md          |  186 +-
 gcc/config/riscv/vector.md                    |   44 +-
 .../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
 .../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
 .../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
 .../riscv/rvv/xtheadvector/prefix.c           |   12 +
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
 gcc/testsuite/lib/target-supports.exp         |   12 +
 39 files changed, 5931 insertions(+), 213 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
  2023-12-23  3:37             ` Re: Re: [PATCH " joshua
@ 2023-12-23 22:52               ` 钟居哲
  0 siblings, 0 replies; 69+ messages in thread
From: 钟居哲 @ 2023-12-23 22:52 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, Jeff Law,
	Christoph Müllner

[-- Attachment #1: Type: text/plain, Size: 13570 bytes --]

I suggest you send the first patch, which supports theadvector by only adding "th.".
After it's done, we can talk about the rest later.



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-23 11:37
To: juzhe.zhong@rivai.ai; gcc-patches
CC: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,

Sorry, but I'm not quite familiar with the group_overlap framework. Could you take this pattern as an example to show how to disable an alternative for a particular target?

Joshua

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-22 (Friday) 18:32
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

Yeah.

(define_insn "@pred_msbc<mode>"
  [(set (match_operand:<VM> 0 "register_operand"        "=vr, vr, &vr")
  (unspec:<VM>
     [(minus:VI
       (match_operand:VI 1 "register_operand"     "  0, vr,  vr")
       (match_operand:VI 2 "register_operand"     " vr,  0,  vr"))
      (match_operand:<VM> 3 "register_operand"    " vm, vm,  vm")
      (unspec:<VM>
        [(match_operand 4 "vector_length_operand" " rK, rK,  rK")
         (match_operand 5 "const_int_operand"     "  i,  i,   i")
         (reg:SI VL_REGNUM)
         (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
  "TARGET_VECTOR"
  "vmsbc.vvm\t%0,%1,%2,%3"
  [(set_attr "type" "vicalu")
   (set_attr "mode" "<MODE>")
   (set_attr "vl_op_idx" "4")
   (set (attr "avl_type_idx") (const_int 5))])

You should use an attribute to disable the alternative 0 and alternative 1 constraints.


juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-22 18:29
To: juzhe.zhong@rivai.ai; gcc-patches
CC: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,
What xtheadvector needs to handle is just that the destination vector register cannot overlap the source vector register group for instructions like vmadc/vmsbc. That is not what group_overlap means. We need to add "&" to the registers in the corresponding xtheadvector patterns, while rvv 1.0 doesn't have this constraint.

(define_insn "@pred_th_msbc<mode>"
  [(set (match_operand:<VM> 0 "register_operand"        "=&vr")
(unspec:<VM>
    [(minus:VI
      (match_operand:VI 1 "register_operand"     "  vr")
      (match_operand:VI 2 "register_operand"     " vr"))
    (match_operand:<VM> 3 "register_operand"    " vm")
    (unspec:<VM>
      [(match_operand 4 "vector_length_operand" " rK")
        (match_operand 5 "const_int_operand"     "  i")
        (reg:SI VL_REGNUM)
        (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
  "TARGET_XTHEADVECTOR"
  "vmsbc.vvm\t%0,%1,%2,%3"
  [(set_attr "type" "vicalu")
  (set_attr "mode" "<MODE>")
  (set_attr "vl_op_idx" "4")
  (set (attr "avl_type_idx") (const_int 5))])

Joshua







------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-22 (Friday) 16:07
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

You mean theadvector doesn't want the current RVV1.0 register overlap magic as follows?

 * The destination EEW is smaller than the source EEW and the overlap is in the lowest-numbered part of the source register group (e.g., when LMUL=1, vnsrl.wi v0, v0, 3 is legal, but a destination of v1 is not).

 * The destination EEW is greater than the source EEW, the source EMUL is at least 1, and the overlap is in the highest-numbered part of the destination register group (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).

If yes, I suggest disabling the overlap constraint using an attribute. More details you can learn from

(set_attr "group_overlap"


juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-22 11:33
To: 钟居哲; gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; Jeff Law; Christoph Müllner; jinma; Cooper Qu
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
Hi Juzhe,

Thank you for your comprehensive comments.

Classifying theadvector intrinsics into 3 kinds is really important to make our patchset more organized. 

For 1) and 3), I will split out the patches soon and hope they will be merged quickly.
For 2), according to the differences between vector and xtheadvector, it can be classified into 3 kinds.

First is renamed instructions: renamed load/store, renamed narrowing integer right shift, renamed narrowing fixed-point clip, etc. I think we can use an ASM targethook to rewrite the whole instruction string, although it will still be heavy work.
Second is instructions with no pseudo-instruction equivalents, like vneg/vfneg. We will add these pseudo instructions in binutils to make xtheadvector more compatible with vector.
Third is that the destination vector register cannot overlap the source vector register group for vmadc/vmsbc/widening arithmetic/narrowing arithmetic. Currently I cannot come up with any better way than copying patterns. Do you have any suggestions?

Joshua




------------------------------------------------------------------
From: 钟居哲 <juzhe.zhong@rivai.ai>
Sent: 2023-12-21 (Thursday) 07:04
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
CC: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; Jeff Law<jeffreyalaw@gmail.com>; "Christoph Müllner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; Cooper Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 0/6] RISC-V: Support XTheadVector extension

Hi, Joshua.

Thanks for working hard on cleaning up the code and for the tons of work on theadvector.

After fully reviewing this patch, I understand you have 3 kinds of theadvector intrinsics relative to the codebase of the current RVV1.0 GCC.

1). instructions that can leverage all the current RVV1.0 intrinsic code by simply adding the "th." prefix directly.
2). instructions that leverage the current MD patterns but need some tweaks and pattern copies, since they are not obtained by simply adding "th.".
3). new instructions that the current RVV1.0 doesn't have, like the vlb instructions.

Overall, 1) and 3) look reasonable to me, but 2) needs some time for me to figure out a better way to do it (the current approach in this patch of copying patterns is not one I like).

So, I hope you can break this big patch into 3 different patch series.

1. Support the subset of theadvector instructions that leverage directly from the current RVV1.0 by simply adding the "th." prefix.
2. Support theadvector instructions that have totally different names but share the same patterns as RVV1.0 instructions.
3. Support new theadvector instructions like vlb, etc.

I think the separate patches for 1 and 3 can be merged quickly once I have reviewed and approved the details in the follow-up patches you send, e.g. a v4.

For 2, it's a bit more complicated, but I think we can do what ARM and other targets do: use an ASM targethook to rewrite the whole instruction string.
For example, for strided loads/stores, you can recognize these instructions from the attribute:
(set_attr "type" "vlds")






juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-20 20:20
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v3 0/6] RISC-V: Support XTheadVector extension
This patch series presents gcc implementation of the XTheadVector
extension [1].
 
[1] https://github.com/T-head-Semi/thead-extension-spec/
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in order not to
generate instructions that xtheadvector does not support,
causing 36 changes in vector.md.
 
For the th. prefix issue, we use current_output_insn and
the ASM_OUTPUT_OPCODE hook instead of directly modifying
patterns in vector.md.
 
We have run the GCC test suite and can confirm that there
are no regressions.
 
All the test results can be found in the following links,
Run without xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803686.html
 
Run with xtheadvector:
https://gcc.gnu.org/pipermail/gcc-testresults/2023-December/803687.html
 
Furthermore, we have run the tests in 
https://github.com/riscv-non-isa/rvv-intrinsic-doc/tree/main/examples, 
and all the tests passed.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
 
RISC-V: Refactor riscv-vector-builtins-bases.cc
RISC-V: Split csr_operand in predicates.md for vector patterns
RISC-V: Introduce XTheadVector as a subset of V1.0.0
RISC-V: Adds the prefix "th." for the instructions of XTheadVector
RISC-V: Handle differences between XTheadvector and Vector
RISC-V: Add support for xtheadvector-specific intrinsics
 
---
gcc/common/config/riscv/riscv-common.cc       |   23 +
gcc/config.gcc                                |    4 +-
gcc/config/riscv/autovec.md                   |    2 +-
gcc/config/riscv/predicates.md                |    8 +-
gcc/config/riscv/riscv-c.cc                   |    8 +-
gcc/config/riscv/riscv-protos.h               |    1 +
gcc/config/riscv/riscv-string.cc              |    3 +
gcc/config/riscv/riscv-v.cc                   |   13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   18 +-
.../riscv/riscv-vector-builtins-bases.h       |   19 +
.../riscv/riscv-vector-builtins-shapes.cc     |  149 +
.../riscv/riscv-vector-builtins-shapes.h      |    3 +
.../riscv/riscv-vector-builtins-types.def     |  120 +
gcc/config/riscv/riscv-vector-builtins.cc     |  315 +-
gcc/config/riscv/riscv-vector-builtins.h      |    5 +-
gcc/config/riscv/riscv-vector-switch.def      |  150 +-
gcc/config/riscv/riscv.cc                     |   46 +-
gcc/config/riscv/riscv.h                      |    4 +
gcc/config/riscv/riscv.opt                    |    2 +
gcc/config/riscv/riscv_th_vector.h            |   49 +
gcc/config/riscv/t-riscv                      |   16 +
.../riscv/thead-vector-builtins-functions.def |  659 ++++
gcc/config/riscv/thead-vector-builtins.cc     |  887 ++++++
gcc/config/riscv/thead-vector-builtins.h      |  123 +
gcc/config/riscv/thead-vector.md              | 2827 +++++++++++++++++
gcc/config/riscv/vector-iterators.md          |  186 +-
gcc/config/riscv/vector.md                    |   44 +-
.../riscv/predef-__riscv_th_v_intrinsic.c     |   11 +
.../gcc.target/riscv/rvv/base/abi-1.c         |    2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |    2 +-
.../gcc.target/riscv/rvv/xtheadvector.c       |   13 +
.../riscv/rvv/xtheadvector/prefix.c           |   12 +
.../riscv/rvv/xtheadvector/vlb-vsb.c          |   68 +
.../riscv/rvv/xtheadvector/vlbu-vsb.c         |   68 +
.../riscv/rvv/xtheadvector/vlh-vsh.c          |   68 +
.../riscv/rvv/xtheadvector/vlhu-vsh.c         |   68 +
.../riscv/rvv/xtheadvector/vlw-vsw.c          |   68 +
.../riscv/rvv/xtheadvector/vlwu-vsw.c         |   68 +
gcc/testsuite/lib/target-supports.exp         |   12 +
39 files changed, 5931 insertions(+), 213 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
create mode 100644 gcc/config/riscv/thead-vector-builtins.h
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
 



^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
  2023-12-20 18:22     ` Jeff Law
@ 2023-12-25  6:25     ` Jun Sha (Joshua)
  2023-12-25  6:37       ` juzhe.zhong
  2023-12-25  8:14       ` [PATCH " Jun Sha (Joshua)
  1 sibling, 2 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-25  6:25 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch adds the th. prefix to all XTheadVector instructions by
implementing new assembly output functions. In this version, we
follow Kito's suggestion and only check whether the prefix is 'v',
so that no extra attribute is needed.
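
For example (an illustrative sketch; the exact registers depend on
register allocation), the vadd generated for the new prefix.c test below
is printed as

	th.vadd.vv	v1,v1,v2

because the hook sees the template's mnemonic "vadd.vv" beginning with
'v' and emits "th." in front of it.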

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
	New function to add assembler insn code prefix/suffix.
	* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
	* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/riscv-protos.h               |  1 +
 gcc/config/riscv/riscv.cc                     | 19 +++++++++++++++++++
 gcc/config/riscv/riscv.h                      |  4 ++++
 .../riscv/rvv/xtheadvector/prefix.c           | 12 ++++++++++++
 4 files changed, 36 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
 };
 
 /* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
 extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
 extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
 extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
 }
 
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add the th. prefix to all the xtheadvector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
 /* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
 
    'h'	Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME));				\
   } while (0)
 
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR)	\
+  (PTR) = riscv_asm_output_opcode(STREAM, PTR)
+
 #define JUMP_TABLES_IN_TEXT_SECTION 0
 #define CASE_VECTOR_MODE SImode
 #define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
  2023-12-20 14:00     ` 钟居哲
@ 2023-12-25  6:29     ` Jun Sha (Joshua)
  2023-12-29  1:46       ` Jun Sha (Joshua)
  1 sibling, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-25  6:29 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch is to handle the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that leverage directly from
the current RVV1.0 by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV1.0 instructions, we will use an ASM targethook to rewrite the
whole instruction string in the following patches.

For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
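
As a condensed sketch of that guard idiom (the pattern below is a
simplified stand-in, not an actual pattern from vector.md):

(define_insn "*th_guard_example"
  [(set (match_operand:V 0 "register_operand" "=vr")
	(match_operand:V 1 "register_operand" " vr"))]
  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "vmv1r.v\t%0,%1"
  [(set_attr "type" "vmov")
   (set_attr "mode" "<MODE>")])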

gcc/ChangeLog:

	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 120 +++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 16 files changed, 449 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	     (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..f3496d9e72e 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,17 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
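
The intent of the three hook changes above: under XTheadVector, the
middle end falls back to word_mode and the default vector-mode set, so
a plain C loop such as the one below is expected not to be
auto-vectorized with RVV modes.  A behavioural sketch under that
assumption, not a guaranteed codegen contract:

void vadd (int *restrict a, const int *restrict b, int n)
{
  for (int i = 0; i < n; i++)
    a[i] += b[i];
}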
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.
+   It does not define the RVV types and intrinsic functions directly in C
+   and C++ code, but instead uses the following pragma to tell GCC to insert
+   the necessary type and function definitions itself.  The net effect is
+   the same; the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
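
For reference, a minimal consumer of this header; a sketch that assumes
an -march string enabling xtheadvector and only exercises the type
registration performed by the pragma (the intrinsics themselves are
added later in this series):

#include <riscv_th_vector.h>

/* vint8m1_t is provided by the pragma, not declared in the header.  */
vint8m1_t pass_through (vint8m1_t v) { return v; }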
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..ebbb25f8f2a
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,120 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
\ No newline at end of file
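
As a rough illustration (not part of the patch), the three vsetvl
patterns above emit the XTheadVector flavour of vsetvli, which carries
no tail/mask policy fields.  Assuming the standard vsetvl intrinsic is
routed onto them:

#include <riscv_th_vector.h>

size_t set_vl (size_t n)
{
  return __riscv_vsetvl_e32m1 (n);
  /* th_vsetvl:                   vsetvli a0,a0,e32,m1
     th_vsetvl_discard_result:    vsetvli zero,a0,e32,m1
     th_vsetvl_vtype_change_only: vsetvli zero,zero,e32,m1  */
}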
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,17 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
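
With the new effective target in place, a test can be gated on (or away
from) XTheadVector in the usual dejagnu way, mirroring the abi-1.c
change above; the options line is illustrative only:

/* { dg-do compile { target { riscv_xtheadvector } } } */
/* { dg-options "-O2" } */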
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 6/6] RISC-V: Add support for xtheadvector-specific intrinsics.
  2023-12-20 12:36   ` [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics Jun Sha (Joshua)
@ 2023-12-25  6:31     ` Jun Sha (Joshua)
  2023-12-29  1:49       ` Jun Sha (Joshua)
  0 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-25  6:31 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch only covers the generation of the xtheadvector-specific
load/store instructions and the vext instructions.  A brief sketch of
the intended user-level API follows.

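A minimal sketch of the intended user-level API; the intrinsic names
are assumed from the new shapes and the test file names (e.g.
vlb-vsb.c), so see those tests for the authoritative spelling:

#include <riscv_th_vector.h>

/* th.vlb.v / th.vsb.v: byte load/store taking an explicit VL.  */
void copy_bytes (int8_t *dst, const int8_t *src, size_t vl)
{
  vint8m1_t v = __riscv_th_vlb_v_i8m1 (src, vl);
  __riscv_th_vsb_v_i8m1 (dst, v, vl);
}
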
gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc
	(class th_loadstore_width): Define new builtin bases.
	(BASE): Define new builtin bases.
	* config/riscv/riscv-vector-builtins-bases.h:
	Define new builtin class.
	* config/riscv/riscv-vector-builtins-functions.def (vlsegff):
	Include thead-vector-builtins-functions.def.
	* config/riscv/riscv-vector-builtins-shapes.cc
	(struct th_loadstore_width_def): Define new builtin shapes.
	(struct th_indexed_loadstore_width_def):
	Define new builtin shapes.
	(SHAPE): Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-shapes.h:
	Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-types.def
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	(vint8m1_t): Add datatypes for XTheadVector.
	(vint8m2_t): Likewise.
	(vint8m4_t): Likewise.
	(vint8m8_t): Likewise.
	(vint16m1_t): Likewise.
	(vint16m2_t): Likewise.
	(vint16m4_t): Likewise.
	(vint16m8_t): Likewise.
	(vint32m1_t): Likewise.
	(vint32m2_t): Likewise.
	(vint32m4_t): Likewise.
	(vint32m8_t): Likewise.
	(vint64m1_t): Likewise.
	(vint64m2_t): Likewise.
	(vint64m4_t): Likewise.
	(vint64m8_t): Likewise.
	(vuint8m1_t): Likewise.
	(vuint8m2_t): Likewise.
	(vuint8m4_t): Likewise.
	(vuint8m8_t): Likewise.
	(vuint16m1_t): Likewise.
	(vuint16m2_t): Likewise.
	(vuint16m4_t): Likewise.
	(vuint16m8_t): Likewise.
	(vuint32m1_t): Likewise.
	(vuint32m2_t): Likewise.
	(vuint32m4_t): Likewise.
	(vuint32m8_t): Likewise.
	(vuint64m1_t): Likewise.
	(vuint64m2_t): Likewise.
	(vuint64m4_t): Likewise.
	(vuint64m8_t): Likewise.
	* config/riscv/riscv-vector-builtins.cc
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector.md: Add new patterns.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 .../riscv/riscv-vector-builtins-shapes.cc     | 126 +++++++
 .../riscv/riscv-vector-builtins-shapes.h      |   3 +
 .../riscv/riscv-vector-builtins-types.def     | 120 +++++++
 gcc/config/riscv/riscv-vector-builtins.cc     | 313 +++++++++++++++++-
 gcc/config/riscv/riscv-vector-builtins.h      |   3 +
 gcc/config/riscv/t-riscv                      |  16 +
 .../riscv/thead-vector-builtins-functions.def |  39 +++
 gcc/config/riscv/thead-vector-builtins.cc     | 200 +++++++++++
 gcc/config/riscv/thead-vector-builtins.h      |  64 ++++
 gcc/config/riscv/thead-vector.md              | 255 +++++++++++++-
 11 files changed, 1138 insertions(+), 3 deletions(-)
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 1445d98c147..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,7 +547,7 @@ riscv*)
 	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
 	extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
-	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+	extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
 	d_target_objs="riscv-d.o"
 	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 6b49404a1fa..7d7c1f6f4b1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -211,6 +211,102 @@ struct indexed_loadstore_def : public function_shape
   }
 };
 
+/* th_loadstore_width_def class.  */
+struct th_loadstore_width_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do nothing if the xtheadvector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if the xtheadvector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return nullptr;
+
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+       for vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+/* th_indexed_loadstore_width_def class.  */
+struct th_indexed_loadstore_width_def : public function_shape
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do nothing if the xtheadvector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES;
+	 ++pred_idx)
+      {
+	for (unsigned int vec_type_idx = 0;
+	     group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES;
+	     ++vec_type_idx)
+	  {
+	    tree index_type = group.ops_infos.args[1].get_tree_type
+	      (group.ops_infos.types[vec_type_idx].index);
+	    if (!index_type)
+	      continue;
+	    build_one (b, group, pred_idx, vec_type_idx);
+	  }
+      }
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+       for vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 /* alu_def class.  */
 struct alu_def : public build_base
 {
@@ -632,6 +728,31 @@ struct reduc_alu_def : public build_base
   }
 };
 
+/* th_extract_def class.  */
+struct th_extract_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do nothing if the xtheadvector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    b.append_base_name (instance.base_name);
+    if (overloaded_p)
+      return b.finish_name ();
+    b.append_name (type_suffixes[instance.type.index].vector);
+    b.append_name (type_suffixes[instance.type.index].scalar);
+    return b.finish_name ();
+  }
+};
+
 /* scalar_move_def class.  */
 struct scalar_move_def : public build_base
 {
@@ -1011,6 +1132,8 @@ SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
 SHAPE(indexed_loadstore, indexed_loadstore)
+SHAPE(th_loadstore_width, th_loadstore_width)
+SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width)
 SHAPE(alu, alu)
 SHAPE(alu_frm, alu_frm)
 SHAPE(widen_alu, widen_alu)
@@ -1023,6 +1146,7 @@ SHAPE(move, move)
 SHAPE(mask_alu, mask_alu)
 SHAPE(reduc_alu, reduc_alu)
 SHAPE(reduc_alu_frm, reduc_alu_frm)
+SHAPE(th_extract, th_extract)
 SHAPE(scalar_move, scalar_move)
 SHAPE(vundefined, vundefined)
 SHAPE(misc, misc)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..a822ba05bdd 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -28,6 +28,8 @@ extern const function_shape *const vsetvl;
 extern const function_shape *const vsetvlmax;
 extern const function_shape *const loadstore;
 extern const function_shape *const indexed_loadstore;
+extern const function_shape *const th_loadstore_width;
+extern const function_shape *const th_indexed_loadstore_width;
 extern const function_shape *const alu;
 extern const function_shape *const alu_frm;
 extern const function_shape *const widen_alu;
@@ -41,6 +43,7 @@ extern const function_shape *const mask_alu;
 extern const function_shape *const reduc_alu;
 extern const function_shape *const reduc_alu_frm;
 extern const function_shape *const scalar_move;
+extern const function_shape *const th_extract;
 extern const function_shape *const vundefined;
 extern const function_shape *const misc;
 extern const function_shape *const vset;
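
For orientation, a hypothetical functions.def entry wiring a builtin
base to the new shape; the operand-info name here is invented, and the
authoritative entries live in thead-vector-builtins-functions.def,
added below:

DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)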
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6aa45ae9a7e..e373d29e51c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_I_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_I8_OPS" macro include some signed integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I8_OPS
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I16_OPS" macro include some signed integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I16_OPS
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I32_OPS" macro include some signed integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I32_OPS
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_U_OPS
 #define DEF_RVV_U_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_U8_OPS" macro include some unsigned integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U8_OPS
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U16_OPS" macro include some unsigned integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U16_OPS
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U32_OPS" macro include some unsigned integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U32_OPS
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_F_OPS" macro include all floating-point which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_F_OPS
@@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_I8_OPS (vint8m1_t, 0)
+DEF_RVV_I8_OPS (vint8m2_t, 0)
+DEF_RVV_I8_OPS (vint8m4_t, 0)
+DEF_RVV_I8_OPS (vint8m8_t, 0)
+DEF_RVV_I8_OPS (vint16m1_t, 0)
+DEF_RVV_I8_OPS (vint16m2_t, 0)
+DEF_RVV_I8_OPS (vint16m4_t, 0)
+DEF_RVV_I8_OPS (vint16m8_t, 0)
+DEF_RVV_I8_OPS (vint32m1_t, 0)
+DEF_RVV_I8_OPS (vint32m2_t, 0)
+DEF_RVV_I8_OPS (vint32m4_t, 0)
+DEF_RVV_I8_OPS (vint32m8_t, 0)
+DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I16_OPS (vint16m1_t, 0)
+DEF_RVV_I16_OPS (vint16m2_t, 0)
+DEF_RVV_I16_OPS (vint16m4_t, 0)
+DEF_RVV_I16_OPS (vint16m8_t, 0)
+DEF_RVV_I16_OPS (vint32m1_t, 0)
+DEF_RVV_I16_OPS (vint32m2_t, 0)
+DEF_RVV_I16_OPS (vint32m4_t, 0)
+DEF_RVV_I16_OPS (vint32m8_t, 0)
+DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I32_OPS (vint32m1_t, 0)
+DEF_RVV_I32_OPS (vint32m2_t, 0)
+DEF_RVV_I32_OPS (vint32m4_t, 0)
+DEF_RVV_I32_OPS (vint32m8_t, 0)
+DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_U_OPS (vuint8mf2_t, 0)
@@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_U8_OPS (vuint8m1_t, 0)
+DEF_RVV_U8_OPS (vuint8m2_t, 0)
+DEF_RVV_U8_OPS (vuint8m4_t, 0)
+DEF_RVV_U8_OPS (vuint8m8_t, 0)
+DEF_RVV_U8_OPS (vuint16m1_t, 0)
+DEF_RVV_U8_OPS (vuint16m2_t, 0)
+DEF_RVV_U8_OPS (vuint16m4_t, 0)
+DEF_RVV_U8_OPS (vuint16m8_t, 0)
+DEF_RVV_U8_OPS (vuint32m1_t, 0)
+DEF_RVV_U8_OPS (vuint32m2_t, 0)
+DEF_RVV_U8_OPS (vuint32m4_t, 0)
+DEF_RVV_U8_OPS (vuint32m8_t, 0)
+DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U16_OPS (vuint16m1_t, 0)
+DEF_RVV_U16_OPS (vuint16m2_t, 0)
+DEF_RVV_U16_OPS (vuint16m4_t, 0)
+DEF_RVV_U16_OPS (vuint16m8_t, 0)
+DEF_RVV_U16_OPS (vuint32m1_t, 0)
+DEF_RVV_U16_OPS (vuint32m2_t, 0)
+DEF_RVV_U16_OPS (vuint32m4_t, 0)
+DEF_RVV_U16_OPS (vuint32m8_t, 0)
+DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U32_OPS (vuint32m1_t, 0)
+DEF_RVV_U32_OPS (vuint32m2_t, 0)
+DEF_RVV_U32_OPS (vuint32m4_t, 0)
+DEF_RVV_U32_OPS (vuint32m8_t, 0)
+DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
 
 #undef DEF_RVV_I_OPS
+#undef DEF_RVV_I8_OPS
+#undef DEF_RVV_I16_OPS
+#undef DEF_RVV_I32_OPS
 #undef DEF_RVV_U_OPS
+#undef DEF_RVV_U8_OPS
+#undef DEF_RVV_U16_OPS
+#undef DEF_RVV_U32_OPS
 #undef DEF_RVV_F_OPS
 #undef DEF_RVV_B_OPS
 #undef DEF_RVV_WEXTI_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..461447afdef 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
 
 using namespace riscv_vector;
 
@@ -246,6 +247,63 @@ static const rvv_type_info iu_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of signed integer types of 8 bits or wider that will be registered
+   for intrinsic functions.  */
+static const rvv_type_info i8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of signed integer types of 16 bits or wider that will be registered
+   for intrinsic functions.  */
+static const rvv_type_info i16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of signed integer types of 32 bits or wider that will be registered
+   for intrinsic functions.  */
+static const rvv_type_info i32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of unsigned integer types of 8 bits or wider that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info u8_ops[] = {
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of unsigned integer types of 16 bits or wider that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info u16_ops[] = {
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of unsigned integer types of 32 bits or wider that will be
+   registered for intrinsic functions.  */
+static const rvv_type_info u32_ops[] = {
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of signed and unsigned integer types of 8 bits or wider that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info iu8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of signed and unsigned integer types of 16 bits or wider that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info iu16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of signed and unsigned integer types of 32 bits or wider that will
+   be registered for intrinsic functions.  */
+static const rvv_type_info iu32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 /* A list of all types will be registered for intrinsic functions.  */
 static const rvv_type_info all_ops[] = {
 #define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -913,7 +971,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[]
 
 /* A list of args for vector_type func (vector_type) function.  */
 static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[]
-  = {rvv_arg_type_info (RVV_BASE_vector),
+  = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, index_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, index_type, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, size_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
      rvv_arg_type_info_end};
 
 /* A list of none preds that will be registered for intrinsic functions.  */
@@ -1429,6 +1512,14 @@ static CONSTEXPR const rvv_op_info iu_shift_vvv_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      shift_vv_args /* Args */};
 
+/* A static operand information for scalar_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_x_s_u_ops
+  = {iu_ops,          /* Types */
+     OP_TYPE_vx,        /* Suffix */
+     rvv_arg_type_info (RVV_BASE_scalar), /* Return type */
+     v_size_args /* Args */};
+
 /* A static operand information for vector_type func (vector_type, size_t)
  * function registration. */
 static CONSTEXPR const rvv_op_info iu_shift_vvx_ops
@@ -2604,6 +2695,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
      rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
      ext_vcreate_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args  */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew8_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew16_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew32_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
 #define DEF_RVV_TYPE_INDEX(                                                    \
@@ -2687,6 +2994,10 @@ static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
 #include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
 };
 
 /* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..234b6f7a196 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */
   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */
   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */
+  XTHEADVECTOR_EXT,   /* XTheadVector extension */
 };
 
 /* Enumerates the RVV operand types.  */
@@ -252,6 +253,8 @@ struct function_group_info
         return TARGET_ZVKSED;
       case ZVKSH_EXT:
         return TARGET_ZVKSH;
+      case XTHEADVECTOR_EXT:
+	return TARGET_XTHEADVECTOR;
       default:
         gcc_unreachable ();
     }
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
 
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..667820d4c3e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,39 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlswu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vext_x_v, th_extract, none_preds, iu_x_s_u_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..c0002f255ee
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,200 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements codegen for th.vl{b,h,w}[u].v / th.vs{b,h,w}.v (unit-stride),
+   th.vls{b,h,w}[u].v / th.vss{b,h,w}.v (strided), and
+   th.vlx{b,h,w}[u].v / th.vs[u]x{b,h,w}.v (indexed).  */
+template<bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
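+    /* Stores and indexed accesses always take a vector argument from which
+       the types can be deduced; an unpredicated load takes only pointer
+       (and stride) arguments, so its return type cannot be deduced and it
+       cannot be overloaded (descriptive note).  */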
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
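+    /* Dispatch on the addressing kind: indexed accesses expand via exact
+       insns, while unit-stride and strided accesses go through the
+       contiguous load/store helpers (descriptive note).  */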
+    if (LST_TYPE == LST_INDEXED)
+      {
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+					       e.vector_mode ()));
+	else
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+      }
+  }
+};
+
+/* Implements vext.x.v.  */
+class th_extract : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    return e.use_exact_insn (code_for_pred_th_extract (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+static CONSTEXPR const th_extract th_vext_x_v_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+BASE (th_vext_x_v)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..4720c6334d8
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,64 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
+extern const function_base *const th_vext_x_v;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
index ebbb25f8f2a..26107dae49b 100644
--- a/gcc/config/riscv/thead-vector.md
+++ b/gcc/config/riscv/thead-vector.md
@@ -1,7 +1,95 @@
 (define_c_enum "unspec" [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW
+  UNSPEC_TH_VLWU
+
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW
+  UNSPEC_TH_VLSWU
+
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VLXWU
+
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+
   UNSPEC_TH_VWLDST
 ])
 
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+  UNSPEC_TH_VLB UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+  UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+  UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+  (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+  (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+  (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+  (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+  (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+  (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+  (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+  (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+  (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+  (UNSPEC_TH_VSUXB "b")
+  (UNSPEC_TH_VSUXH "h")
+  (UNSPEC_TH_VSUXW "w")
+])
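+
+;; Illustrative mapping (assumed from the patterns below): UNSPEC_TH_VLB maps
+;; to suffix "b", so @pred_mov_width emits "vlb.v"/"vsb.v"; the strided and
+;; indexed patterns pick up b/h/w and bu/hu/wu the same way.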
+
+(define_int_attr vlmem_order_attr [
+  (UNSPEC_TH_VLXB "")
+  (UNSPEC_TH_VLXH "")
+  (UNSPEC_TH_VLXW "")
+  (UNSPEC_TH_VSUXB "u")
+  (UNSPEC_TH_VSUXH "u")
+  (UNSPEC_TH_VSUXW "u")
+])
+
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
 (define_mode_iterator V_VLS_VT [V VLS VT])
 (define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
 
@@ -117,4 +205,169 @@
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "<MODE>")
    (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
-   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
\ No newline at end of file
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+(define_expand "@pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand")
+	 (match_operand 4 "vector_length_operand")
+	 (match_operand 5 "const_int_operand")
+	 (match_operand 6 "const_int_operand")
+	 (match_operand 7 "const_int_operand")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
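+;; The split below turns a register-to-register predicated move with VLMAX
+;; length and an undefined merge operand into a plain (set (reg) (reg)),
+;; mirroring the standard RVV mov patterns (descriptive note).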
+(define_insn_and_split "*pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"	    "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand"	   "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+	 (match_operand 4 "vector_length_operand"	      "   rK,    rK,    rK,    rK,    rK,    rK")
+	 (match_operand 5 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 6 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 7 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"	      "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"	    "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+	|| register_operand (operands[3], <MODE>mode)))"
+  "@
+   vl<vlmem_op_attr>.v\t%0,%3%p1
+   vl<vlmem_op_attr>.v\t%0,%3
+   vl<vlmem_op_attr>.v\t%0,%3,%1.t
+   vs<vlmem_op_attr>.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+	  (match_operand:VI 2 "register_operand"	 "    vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_op_attr>.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+(define_insn "@pred_strided_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	      "=vr,    vr,    vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 7 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 8 "const_int_operand"	"    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP)
+	  (unspec:VI
+	    [(match_operand:VI 3 "memory_operand"	 "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_TH_VLSMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vls<vlmem_op_attr>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP)
+	  (unspec:VI
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:VI 3 "register_operand"       "   vr")] UNSPEC_TH_VSSMEM_OP)
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vss<vlmem_op_attr>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_indexed_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	     "=vd, vr,vd, vr")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+	     (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+	     (match_operand 6 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 7 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 8 "const_int_operand"	 "  i,  i, i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP)
+	  (unspec:VI
+	    [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlx<vlmem_op_attr>.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vldux")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_indexed_<vlmem_order_attr>store_width<vlmem_op_attr><mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:VI 2 "register_operand" "  vr")
+	   (match_operand:VI 3 "register_operand"  "  vr")] UNSPEC_TH_VSXMEM_OP))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_order_attr>x<vlmem_op_attr>.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand")
+	(unspec:<VEL>
+	  [(vec_select:<VEL>
+	     (match_operand:V_VLSI 1 "register_operand")
+	     (parallel [(match_operand:DI 2 "register_operand" "r")]))
+	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+{})
+
+(define_insn "*pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand"   "=r")
+  (unspec:<VEL>
+    [(vec_select:<VEL>
+       (match_operand:V_VLSI 1 "register_operand" "vr")
+       (parallel [(match_operand:DI 2 "register_operand" "r")]))
+     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+  "vext.x.v\t%0,%1,%2"
+  [(set_attr "type" "vimovvx")
+   (set_attr "mode" "<MODE>")])
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
@ 2023-12-25  6:37       ` juzhe.zhong
  2023-12-25  7:08         ` Re: [PATCH " joshua
  2023-12-25  8:14       ` [PATCH " Jun Sha (Joshua)
  1 sibling, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-25  6:37 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, cooper.joshua, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 4321 bytes --]

+  if (current_output_insn == NULL_RTX)
+    return p;

What is this used for?

How about:

+  /* We need to add the th. prefix to all XTheadVector
+     instructions here.  */
+  if (TARGET_XTHEADVECTOR && p[0] == 'v')
+    fputs ("th.", asm_out_file);

\ No newline at end of file

A newline should be added at the end of prefix.c



juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-25 14:25
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
This patch adds the th. prefix to all XTheadVector instructions by
implementing a new assembly output function. In this version, we
follow Kito's suggestion and only check whether the mnemonic starts
with 'v', so that no extra attribute is needed.
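
As a quick sketch of the intended effect (mnemonics assumed for
illustration), every emitted vector opcode gains the "th." prefix:

    vadd.vv v8,v8,v9      ->  th.vadd.vv v8,v8,v9
    vsetvli t0,a0,e32,m1  ->  th.vsetvli t0,a0,e32,m1

while scalar opcodes such as "add" are emitted unchanged.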
 
gcc/ChangeLog:
 
* config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
New function to add assembler insn code prefix/suffix.
* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config/riscv/riscv-protos.h               |  1 +
gcc/config/riscv/riscv.cc                     | 19 +++++++++++++++++++
gcc/config/riscv/riscv.h                      |  4 ++++
.../riscv/rvv/xtheadvector/prefix.c           | 12 ++++++++++++
4 files changed, 36 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
};
/* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
}
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+ /* We need to add the th. prefix to all XTheadVector
+ instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
/* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
    'h' Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME)); \
   } while (0)
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
+  (PTR) = riscv_asm_output_opcode(STREAM, PTR)
+
#define JUMP_TABLES_IN_TEXT_SECTION 0
#define CASE_VECTOR_MODE SImode
#define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
-- 
2.17.1
 
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-25  6:37       ` juzhe.zhong
@ 2023-12-25  7:08         ` joshua
  2023-12-25  7:09           ` juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-25  7:08 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 5313 bytes --]

+ if (current_output_insn == NULL_RTX)
+ return p;
This is for the inline assembly case.
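
A minimal sketch of that case (illustrative, names assumed): for
user-written inline assembly there is no recognized insn to key off,
so the hook returns early and the user's text is emitted unchanged:

void f (void)
{
  /* Reaches ASM_OUTPUT_OPCODE without a current insn; the mnemonic
     below is emitted exactly as written.  */
  asm volatile ("vadd.vv v8,v8,v9");
}
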
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Monday, December 25, 2023, 14:37
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
+ if (current_output_insn == NULL_RTX)
+ return p;
What is this used for?
How about:
+ /* We need to add the th. prefix to all XTheadVector
+ instructions here.  */
+ if (TARGET_XTHEADVECTOR && p[0] == 'v')
+ fputs ("th.", asm_out_file);
\ No newline at end of file
A newline should be added at the end of prefix.c
juzhe.zhong@rivai.ai
From: Jun Sha (Joshua) <mailto:cooper.joshua@linux.alibaba.com >
Date: 2023-12-25 14:25
To: gcc-patches <mailto:gcc-patches@gcc.gnu.org >
CC: jim.wilson.gcc <mailto:jim.wilson.gcc@gmail.com >; palmer <mailto:palmer@dabbelt.com >; andrew <mailto:andrew@sifive.com >; philipp.tomsich <mailto:philipp.tomsich@vrull.eu >; jeffreyalaw <mailto:jeffreyalaw@gmail.com >; christoph.muellner <mailto:christoph.muellner@vrull.eu >; juzhe.zhong <mailto:juzhe.zhong@rivai.ai >; Jun Sha (Joshua) <mailto:cooper.joshua@linux.alibaba.com >; Jin Ma <mailto:jinma@linux.alibaba.com >; Xianmiao Qu <mailto:cooper.qu@linux.alibaba.com >
Subject: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
This patch adds the th. prefix to all XTheadVector instructions by
implementing a new assembly output function. In this version, we
follow Kito's suggestion and only check whether the mnemonic starts
with 'v', so that no extra attribute is needed.
gcc/ChangeLog:
 * config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
 New function to add assembler insn code prefix/suffix.
 * config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
 * config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/riscv-protos.h | 1 +
 gcc/config/riscv/riscv.cc | 19 +++++++++++++++++++
 gcc/config/riscv/riscv.h | 4 ++++
 .../riscv/rvv/xtheadvector/prefix.c | 12 ++++++++++++
 4 files changed, 36 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
 };
 /* Routines implemented in riscv.cc. */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
 extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
 extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
 extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
 return lmul;
 }
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+ emitting an opcode. */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+ if (!TARGET_XTHEADVECTOR)
+ return p;
+
+ if (current_output_insn == NULL_RTX)
+ return p;
+
+ /* We need to add th. prefix to all the xtheadvector
+ instructions here.  */
+ if (p[0] == 'v')
+ fputs ("th.", asm_out_file);
+
+ return p;
+}
+
 /* Implement TARGET_PRINT_OPERAND. The RISCV-specific operand codes are:
 'h' Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
 asm_fprintf ((FILE), "%U%s", (NAME)); \
 } while (0)
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
+ (PTR) = riscv_asm_output_opcode (STREAM, PTR)
+
 #define JUMP_TABLES_IN_TEXT_SECTION 0
 #define CASE_VECTOR_MODE SImode
 #define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+ return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-25  7:08         ` Re: [PATCH " joshua
@ 2023-12-25  7:09           ` juzhe.zhong
  0 siblings, 0 replies; 69+ messages in thread
From: juzhe.zhong @ 2023-12-25  7:09 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 5569 bytes --]

OK. This sub-patch is OK to commit after adding the newline to prefix.c.



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-25 15:08
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
+  if (current_output_insn == NULL_RTX)
+    return p;

This is for the inline assembly case.
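
A minimal sketch of that case (hypothetical example, not part of the
patch): when GCC prints the template of a user asm statement, there is
no recognized insn behind it, so riscv_asm_output_opcode sees
current_output_insn == NULL_RTX and returns early, leaving hand-written
assembly untouched:

void
f (void)
{
  /* Hand-written vector code; the early return in the hook means this
     opcode is emitted exactly as written, without the th. prefix.  */
  asm volatile ("vsetvli t0, zero, e8, m1" ::: "t0");
}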
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Monday, December 25, 2023 14:37
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.

+  if (current_output_insn == NULL_RTX)
+    return p;

What is this used for?

How about:

+  /* We need to add th. prefix to all the xtheadvector
+     instructions here.  */
+  if (TARGET_XTHEADVECTOR && p[0] == 'v')
+    fputs ("th.", asm_out_file);

\ No newline at end of file

A newline should be added at the end of prefix.c.



juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-25 14:25
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
This patch adds the th. prefix to all XTheadVector instructions by
implementing new assembly output functions. In this version, we
follow Kito's suggestion and only check whether the opcode starts
with 'v', so that no extra attribute is needed.
 
gcc/ChangeLog:
 
* config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
New function to add assembler insn code prefix/suffix.
* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config/riscv/riscv-protos.h               |  1 +
gcc/config/riscv/riscv.cc                     | 19 +++++++++++++++++++
gcc/config/riscv/riscv.h                      |  4 ++++
.../riscv/rvv/xtheadvector/prefix.c           | 12 ++++++++++++
4 files changed, 36 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
};
/* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
}
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add th. prefix to all the xtheadvector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
/* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
    'h' Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME)); \
   } while (0)
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
+  (PTR) = riscv_asm_output_opcode (STREAM, PTR)
+
#define JUMP_TABLES_IN_TEXT_SECTION 0
#define CASE_VECTOR_MODE SImode
#define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
-- 
2.17.1
 
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
  2023-12-25  6:37       ` juzhe.zhong
@ 2023-12-25  8:14       ` Jun Sha (Joshua)
  2023-12-25  8:18         ` juzhe.zhong
  1 sibling, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-25  8:14 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch adds the th. prefix to all XTheadVector instructions by
implementing new assembly output functions. In this version, we
follow Kito's suggestion and only check whether the opcode starts
with 'v', so that no extra attribute is needed.
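
For instance (illustrative register allocation, not output from an
actual build), the vadd intrinsic in the new prefix.c test below would
now be emitted as

	th.vsetvli	zero,a0,e32,m1
	th.vadd.vv	v1,v1,v2

since both opcodes start with 'v'.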

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
	New function to add assembler insn code prefix/suffix.
	* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
	* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/riscv-protos.h               |  1 +
 gcc/config/riscv/riscv.cc                     | 19 +++++++++++++++++++
 gcc/config/riscv/riscv.h                      |  4 ++++
 .../riscv/rvv/xtheadvector/prefix.c           | 12 ++++++++++++
 4 files changed, 36 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
 };
 
 /* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
 extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
 extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
 extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
 }
 
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add th. prefix to all the xtheadvector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
 /* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
 
    'h'	Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME));				\
   } while (0)
 
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR)	\
+  (PTR) = riscv_asm_output_opcode (STREAM, PTR)	\
+
 #define JUMP_TABLES_IN_TEXT_SECTION 0
 #define CASE_VECTOR_MODE SImode
 #define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
  2023-12-25  8:14       ` [PATCH " Jun Sha (Joshua)
@ 2023-12-25  8:18         ` juzhe.zhong
  0 siblings, 0 replies; 69+ messages in thread
From: juzhe.zhong @ 2023-12-25  8:18 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, cooper.joshua, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 3960 bytes --]

OK.



juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-25 16:14
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector.
This patch adds the th. prefix to all XTheadVector instructions by
implementing new assembly output functions. In this version, we
follow Kito's suggestion and only check whether the opcode starts
with 'v', so that no extra attribute is needed.
 
gcc/ChangeLog:
 
* config/riscv/riscv-protos.h (riscv_asm_output_opcode): 
New function to add assembler insn code prefix/suffix.
* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config/riscv/riscv-protos.h               |  1 +
gcc/config/riscv/riscv.cc                     | 19 +++++++++++++++++++
gcc/config/riscv/riscv.h                      |  4 ++++
.../riscv/rvv/xtheadvector/prefix.c           | 12 ++++++++++++
4 files changed, 36 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
 
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
};
/* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
}
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add th. prefix to all the xtheadvector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
/* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
    'h' Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
       asm_fprintf ((FILE), "%U%s", (NAME)); \
   } while (0)
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR) \
+  (PTR) = riscv_asm_output_opcode (STREAM, PTR)
+
#define JUMP_TABLES_IN_TEXT_SECTION 0
#define CASE_VECTOR_MODE SImode
#define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
-- 
2.17.1
 
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc
  2023-12-20 18:14     ` Jeff Law
@ 2023-12-27  2:46       ` joshua
  2023-12-29  1:44       ` joshua
  1 sibling, 0 replies; 69+ messages in thread
From: joshua @ 2023-12-27  2:46 UTC (permalink / raw)
  To: Jeff Law, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu

Hi Jeff,

Perhaps fold_fault_load cannot be moved to riscv-protos.h, since
gimple_folder is declared in riscv-vector-builtins.h and it's not
reasonable to include riscv-vector-builtins.h in riscv-protos.h.

In fact, fold_fault_load is defined specifically for some builtin
functions, and it would be better to just prototype it in
riscv-vector-builtins-bases.h.
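
Roughly, the declaration I have in mind (a sketch only, assuming the
function keeps its current gimple * return type and its gimple_folder
parameter, which riscv-vector-builtins-bases.h can see once it includes
riscv-vector-builtins.h):

/* In riscv-vector-builtins-bases.h.  */
extern gimple *fold_fault_load (gimple_folder &f);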

Joshua







------------------------------------------------------------------
From: Jeff Law <jeffreyalaw@gmail.com>
Sent: Thursday, December 21, 2023 02:14
To: "Jun Sha (Joshua)"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; "christoph.muellner"<christoph.muellner@vrull.eu>; "juzhe.zhong"<juzhe.zhong@rivai.ai>; Jin Ma<jinma@linux.alibaba.com>; Xianmiao Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc




On 12/20/23 05:25, Jun Sha (Joshua) wrote:
> This patch moves the definition of the enums lst_type and
> frm_op_type into riscv-vector-builtins-bases.h and removes
> the static visibility of fold_fault_load(), so these
> can be used in other compile units.
> 
> gcc/ChangeLog:
> 
>  * config/riscv/riscv-vector-builtins-bases.cc (enum lst_type):
>  (enum frm_op_type): move to riscv-vector-builtins-bases.h
>  * config/riscv/riscv-vector-builtins-bases.h
>  (GCC_RISCV_VECTOR_BUILTINS_BASES_H): Add header files.
>  (enum lst_type): move from
>  (enum frm_op_type): riscv-vector-builtins-bases.cc
>  (fold_fault_load): riscv-vector-builtins-bases.cc
I'm largely hoping to leave the heavy review lifting here to Juzhe who 
knows GCC's RV vector bits as well as anyone.

Just one small issue.  Would it be better to prototype fold_fault_load 
elsewhere and avoid the gimple.h inclusion in 
riscv-vector-builtins-bases.h?  Perhaps riscv-protos.h?

You might consider prefixing the function name with riscv_.  It's not 
strictly necessary, but it appears to be relatively common in risc-v port.

Thanks,
Jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns.
  2023-12-20 18:16     ` Jeff Law
@ 2023-12-27  2:49       ` joshua
  2023-12-28 15:50         ` Jeff Law
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-27  2:49 UTC (permalink / raw)
  To: Jeff Law, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu

Hi Jeff,

Yes, I will change something in vector_csr_operand in the following
patches.

A constraint will be added so that the AVL cannot be encoded as an
immediate for the xtheadvector vsetvl.
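
Concretely, [PATCH v4 5/6] below changes the predicate along these
lines (quoted from that diff), so that under XTheadVector an immediate
AVL is only accepted when it is a literal zero:

(define_predicate "vector_csr_operand"
  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
	    (match_operand 0 "const_csr_operand"))
       (match_operand 0 "register_operand")))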

Joshua







------------------------------------------------------------------
From: Jeff Law <jeffreyalaw@gmail.com>
Sent: Thursday, December 21, 2023 02:16
To: "Jun Sha (Joshua)"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; "christoph.muellner"<christoph.muellner@vrull.eu>; "juzhe.zhong"<juzhe.zhong@rivai.ai>; Jin Ma<jinma@linux.alibaba.com>; Xianmiao Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns.




On 12/20/23 05:27, Jun Sha (Joshua) wrote:
> This patch splits the definition of csr_operand in predicates.md.
> The newly defined vector_csr_operand has the same functionality
> as csr_operand but can only be used in vector patterns, so that
> changes for vector will not affect scalar patterns in files
> like riscv.md.
> 
> gcc/ChangeLog:
> 
>  * config/riscv/predicates.md (vector_csr_operand):
>  Define vector_csr_opeand for vector.
>  * config/riscv/vector.md:
>  Use newly defined csr_operand for vector.
So do you envision changing something in vector_csr_operand?  If not, 
then this doesn't make much sense.

Jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re: [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns.
  2023-12-27  2:49       ` Re: [PATCH " joshua
@ 2023-12-28 15:50         ` Jeff Law
  0 siblings, 0 replies; 69+ messages in thread
From: Jeff Law @ 2023-12-28 15:50 UTC (permalink / raw)
  To: joshua, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong, Jin Ma, Xianmiao Qu



On 12/26/23 19:49, joshua wrote:
> Hi Jeff,
> 
> Yes, I will change soemthing in vector_csr_operand in the following
> patches.

> 
> Constraints will be added that the AVL cannot be encoded as an
> immediate for xtheadvecotr vsetvl.
Ah.  Thanks.  Makes sense.

jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc
  2023-12-20 18:14     ` Jeff Law
  2023-12-27  2:46       ` Re: [PATCH " joshua
@ 2023-12-29  1:44       ` joshua
  1 sibling, 0 replies; 69+ messages in thread
From: joshua @ 2023-12-29  1:44 UTC (permalink / raw)
  To: Jeff Law, gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich,
	christoph.muellner, juzhe.zhong

Hi Jeff,

Perhaps fold_fault_load cannot be moved to riscv-protos.h, since
gimple_folder is declared in riscv-vector-builtins.h and it's not
reasonable to include riscv-vector-builtins.h in riscv-protos.h.

In fact, fold_fault_load is defined specifically for some builtin
functions, and it would be better to just prototype it in
riscv-vector-builtins-bases.h.

Joshua






------------------------------------------------------------------
From: Jeff Law <jeffreyalaw@gmail.com>
Sent: Thursday, December 21, 2023 02:14
To: "Jun Sha (Joshua)"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: "jim.wilson.gcc"<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; "christoph.muellner"<christoph.muellner@vrull.eu>; "juzhe.zhong"<juzhe.zhong@rivai.ai>; Jin Ma<jinma@linux.alibaba.com>; Xianmiao Qu<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc




On 12/20/23 05:25, Jun Sha (Joshua) wrote:
> This patch moves the definition of the enums lst_type and
> frm_op_type into riscv-vector-builtins-bases.h and removes
> the static visibility of fold_fault_load(), so these
> can be used in other compile units.
> 
> gcc/ChangeLog:
> 
>  * config/riscv/riscv-vector-builtins-bases.cc (enum lst_type):
>  (enum frm_op_type): move to riscv-vector-builtins-bases.h
>  * config/riscv/riscv-vector-builtins-bases.h
>  (GCC_RISCV_VECTOR_BUILTINS_BASES_H): Add header files.
>  (enum lst_type): move from
>  (enum frm_op_type): riscv-vector-builtins-bases.cc
>  (fold_fault_load): riscv-vector-builtins-bases.cc
I'm largely hoping to leave the heavy review lifting here to Juzhe who 
knows GCC's RV vector bits as well as anyone.

Just one small issue.  Would it be better to prototype fold_fault_load 
elsewhere and avoid the gimple.h inclusion in 
riscv-vector-builtins-bases.h?  Perhaps riscv-protos.h?

You might consider prefixing the function name with riscv_.  It's not 
strictly necessary, but it appears to be relatively common in risc-v port.

Thanks,
Jeff

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-25  6:29     ` [PATCH v4 " Jun Sha (Joshua)
@ 2023-12-29  1:46       ` Jun Sha (Joshua)
  2023-12-29  1:58         ` juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-29  1:46 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch handles the differences in instruction generation between
Vector and XTheadVector. In this version, we only support the subset of
XTheadVector instructions that can be derived directly from current
RVV 1.0 instructions by simply adding the "th." prefix. For XTheadVector
instructions that have different names but share the same patterns as
RVV 1.0 instructions, we will use the ASM target hook to rewrite the
whole instruction string in the following patches.

For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
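
The guard itself is a one-line condition change per pattern; the
rawmemchr expander hunk in autovec.md below is representative:

-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"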

gcc/ChangeLog:

	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new marcos.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewsie.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md

diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR) {
+      if (change_vtype_only_p ())
+	return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+      else if (has_vl () && !ignore_vl)
+	return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+      else
+	return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+    }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,17 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v4 6/6] RISC-V: Add support for xtheadvector-specific intrinsics.
  2023-12-25  6:31     ` [PATCH v4 " Jun Sha (Joshua)
@ 2023-12-29  1:49       ` Jun Sha (Joshua)
  0 siblings, 0 replies; 69+ messages in thread
From: Jun Sha (Joshua) @ 2023-12-29  1:49 UTC (permalink / raw)
  To: gcc-patches
  Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, juzhe.zhong, Jun Sha (Joshua),
	Jin Ma, Xianmiao Qu

This patch only involves the generation of the XTheadVector-specific
load/store instructions and the vext instructions.

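For reference, a minimal usage sketch of the new intrinsics (illustrative
only, not part of the patch; the names __riscv_th_vlb_v_i8m1,
__riscv_vadd_vv_i8m1 and __riscv_th_vsb_v_i8m1 are assumed from the
vlb-vsb.c test added below):

    #include "riscv_th_vector.h"

    /* Sign-extending byte load, vector add, byte store; assumes a
       -march string that enables xtheadvector and the intrinsic
       naming used by the new tests.  */
    void foo (int8_t *in, int8_t *out, size_t vl)
    {
      vint8m1_t va = __riscv_th_vlb_v_i8m1 (in, vl);
      vint8m1_t vb = __riscv_vadd_vv_i8m1 (va, va, vl);
      __riscv_th_vsb_v_i8m1 (out, vb, vl);
    }
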
gcc/ChangeLog:

	* config/riscv/riscv-vector-builtins-bases.cc
	(class th_loadstore_width): Define new builtin bases.
	(BASE): Define new builtin bases.
	* config/riscv/riscv-vector-builtins-bases.h:
	Define new builtin class.
	* config/riscv/riscv-vector-builtins-functions.def (vlsegff):
	Include thead-vector-builtins-functions.def.
	* config/riscv/riscv-vector-builtins-shapes.cc
	(struct th_loadstore_width_def): Define new builtin shapes.
	(struct th_indexed_loadstore_width_def):
	Define new builtin shapes.
	(SHAPE): Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-shapes.h:
	Define new builtin shapes.
	* config/riscv/riscv-vector-builtins-types.def
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	(vint8m1_t): Add datatypes for XTheadVector.
	(vint8m2_t): Likewise.
	(vint8m4_t): Likewise.
	(vint8m8_t): Likewise.
	(vint16m1_t): Likewise.
	(vint16m2_t): Likewise.
	(vint16m4_t): Likewise.
	(vint16m8_t): Likewise.
	(vint32m1_t): Likewise.
	(vint32m2_t): Likewise.
	(vint32m4_t): Likewise.
	(vint32m8_t): Likewise.
	(vint64m1_t): Likewise.
	(vint64m2_t): Likewise.
	(vint64m4_t): Likewise.
	(vint64m8_t): Likewise.
	(vuint8m1_t): Likewise.
	(vuint8m2_t): Likewise.
	(vuint8m4_t): Likewise.
	(vuint8m8_t): Likewise.
	(vuint16m1_t): Likewise.
	(vuint16m2_t): Likewise.
	(vuint16m4_t): Likewise.
	(vuint16m8_t): Likewise.
	(vuint32m1_t): Likewise.
	(vuint32m2_t): Likewise.
	(vuint32m4_t): Likewise.
	(vuint32m8_t): Likewise.
	(vuint64m1_t): Likewise.
	(vuint64m2_t): Likewise.
	(vuint64m4_t): Likewise.
	(vuint64m8_t): Likewise.
	* config/riscv/riscv-vector-builtins.cc
	(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
	(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
	* config/riscv/thead-vector-builtins-functions.def: New file.
	* config/riscv/thead-vector.md: Add new patterns.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test.
	* gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 .../riscv/riscv-vector-builtins-shapes.cc     | 126 +++++++
 .../riscv/riscv-vector-builtins-shapes.h      |   3 +
 .../riscv/riscv-vector-builtins-types.def     | 120 +++++++
 gcc/config/riscv/riscv-vector-builtins.cc     | 313 +++++++++++++++++-
 gcc/config/riscv/riscv-vector-builtins.h      |   3 +
 gcc/config/riscv/t-riscv                      |  16 +
 .../riscv/thead-vector-builtins-functions.def |  39 +++
 gcc/config/riscv/thead-vector-builtins.cc     | 200 +++++++++++
 gcc/config/riscv/thead-vector-builtins.h      |  64 ++++
 gcc/config/riscv/thead-vector.md              | 253 ++++++++++++++
 .../riscv/rvv/xtheadvector/vlb-vsb.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlbu-vsb.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlh-vsh.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlhu-vsh.c         |  68 ++++
 .../riscv/rvv/xtheadvector/vlw-vsw.c          |  68 ++++
 .../riscv/rvv/xtheadvector/vlwu-vsw.c         |  68 ++++
 17 files changed, 1545 insertions(+), 2 deletions(-)
 create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/thead-vector-builtins.cc
 create mode 100644 gcc/config/riscv/thead-vector-builtins.h
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 1445d98c147..4478395ab77 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -547,7 +547,7 @@ riscv*)
 	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o"
 	extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o"
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
-	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
+	extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o"
 	d_target_objs="riscv-d.o"
 	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 6b49404a1fa..7d7c1f6f4b1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -211,6 +211,104 @@ struct indexed_loadstore_def : public function_shape
   }
 };
 
+/* th_loadstore_width_def class.  */
+struct th_loadstore_width_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do not register the functions if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    /* Return nullptr if the xtheadvector extension is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return nullptr;
+
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+       for vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
+
+/* th_indexed_loadstore_width_def class.  */
+struct th_indexed_loadstore_width_def : public function_shape
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do not register the functions if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES;
+	 ++pred_idx)
+      {
+	for (unsigned int vec_type_idx = 0;
+	     group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES;
+	     ++vec_type_idx)
+	  {
+	   tree index_type = group.ops_infos.args[1].get_tree_type (
+	      group.ops_infos.types[vec_type_idx].index);
+	   if (!index_type)
+	      continue;
+	   build_one (b, group, pred_idx, vec_type_idx);
+	  }
+      }
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+
+    /* Return nullptr if it cannot be overloaded.  */
+    if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+      return nullptr;
+
+    b.append_base_name (instance.base_name);
+    /* vop_v --> vop_v_<type>.  */
+    if (!overloaded_p)
+      {
+	/* vop --> vop_v.  */
+	b.append_name (operand_suffixes[instance.op_info->op]);
+	/* vop_v --> vop_v_<type>.  */
+	b.append_name (type_suffixes[instance.type.index].vector);
+      }
+
+    /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+       for vop_m C++ overloaded API.  */
+    if (overloaded_p && instance.pred == PRED_TYPE_m)
+      return b.finish_name ();
+    b.append_name (predication_suffixes[instance.pred]);
+    return b.finish_name ();
+  }
+};
+
 /* alu_def class.  */
 struct alu_def : public build_base
 {
@@ -632,6 +730,31 @@ struct reduc_alu_def : public build_base
   }
 };
 
+/* th_extract_def class.  */
+struct th_extract_def : public build_base
+{
+  void build (function_builder &b,
+	      const function_group_info &group) const override
+  {
+    /* Do not register the functions if xtheadvector is not enabled.  */
+    if (!TARGET_XTHEADVECTOR)
+      return;
+
+    build_all (b, group);
+  }
+
+  char *get_name (function_builder &b, const function_instance &instance,
+		  bool overloaded_p) const override
+  {
+    b.append_base_name (instance.base_name);
+    if (overloaded_p)
+      return b.finish_name ();
+    b.append_name (type_suffixes[instance.type.index].vector);
+    b.append_name (type_suffixes[instance.type.index].scalar);
+    return b.finish_name ();
+  }
+};
+
 /* scalar_move_def class.  */
 struct scalar_move_def : public build_base
 {
@@ -1011,6 +1134,8 @@ SHAPE(vsetvl, vsetvl)
 SHAPE(vsetvl, vsetvlmax)
 SHAPE(loadstore, loadstore)
 SHAPE(indexed_loadstore, indexed_loadstore)
+SHAPE(th_loadstore_width, th_loadstore_width)
+SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width)
 SHAPE(alu, alu)
 SHAPE(alu_frm, alu_frm)
 SHAPE(widen_alu, widen_alu)
@@ -1023,6 +1148,7 @@ SHAPE(move, move)
 SHAPE(mask_alu, mask_alu)
 SHAPE(reduc_alu, reduc_alu)
 SHAPE(reduc_alu_frm, reduc_alu_frm)
+SHAPE(th_extract, th_extract)
 SHAPE(scalar_move, scalar_move)
 SHAPE(vundefined, vundefined)
 SHAPE(misc, misc)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..a822ba05bdd 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -28,6 +28,8 @@ extern const function_shape *const vsetvl;
 extern const function_shape *const vsetvlmax;
 extern const function_shape *const loadstore;
 extern const function_shape *const indexed_loadstore;
+extern const function_shape *const th_loadstore_width;
+extern const function_shape *const th_indexed_loadstore_width;
 extern const function_shape *const alu;
 extern const function_shape *const alu_frm;
 extern const function_shape *const widen_alu;
@@ -41,6 +43,7 @@ extern const function_shape *const mask_alu;
 extern const function_shape *const reduc_alu;
 extern const function_shape *const reduc_alu_frm;
 extern const function_shape *const scalar_move;
+extern const function_shape *const th_extract;
 extern const function_shape *const vundefined;
 extern const function_shape *const misc;
 extern const function_shape *const vset;
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6aa45ae9a7e..e373d29e51c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see
 #define DEF_RVV_I_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_I8_OPS" macro include some signed integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I8_OPS
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I16_OPS" macro include some signed integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I16_OPS
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_I32_OPS" macro include some signed integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_I32_OPS
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_U_OPS
 #define DEF_RVV_U_OPS(TYPE, REQUIRE)
 #endif
 
+/* Use "DEF_RVV_U8_OPS" macro include some unsigned integer (i8/i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U8_OPS
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U16_OPS" macro include some unsigned integer (i16/i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U16_OPS
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use "DEF_RVV_U32_OPS" macro include some unsigned integer (i32/i64)
+   which will be iterated and registered as intrinsic functions.  */
+#ifndef DEF_RVV_U32_OPS
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE)
+#endif
+
 /* Use "DEF_RVV_F_OPS" macro include all floating-point which will be
    iterated and registered as intrinsic functions.  */
 #ifndef DEF_RVV_F_OPS
@@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_I8_OPS (vint8m1_t, 0)
+DEF_RVV_I8_OPS (vint8m2_t, 0)
+DEF_RVV_I8_OPS (vint8m4_t, 0)
+DEF_RVV_I8_OPS (vint8m8_t, 0)
+DEF_RVV_I8_OPS (vint16m1_t, 0)
+DEF_RVV_I8_OPS (vint16m2_t, 0)
+DEF_RVV_I8_OPS (vint16m4_t, 0)
+DEF_RVV_I8_OPS (vint16m8_t, 0)
+DEF_RVV_I8_OPS (vint32m1_t, 0)
+DEF_RVV_I8_OPS (vint32m2_t, 0)
+DEF_RVV_I8_OPS (vint32m4_t, 0)
+DEF_RVV_I8_OPS (vint32m8_t, 0)
+DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I16_OPS (vint16m1_t, 0)
+DEF_RVV_I16_OPS (vint16m2_t, 0)
+DEF_RVV_I16_OPS (vint16m4_t, 0)
+DEF_RVV_I16_OPS (vint16m8_t, 0)
+DEF_RVV_I16_OPS (vint32m1_t, 0)
+DEF_RVV_I16_OPS (vint32m2_t, 0)
+DEF_RVV_I16_OPS (vint32m4_t, 0)
+DEF_RVV_I16_OPS (vint32m8_t, 0)
+DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I32_OPS (vint32m1_t, 0)
+DEF_RVV_I32_OPS (vint32m2_t, 0)
+DEF_RVV_I32_OPS (vint32m4_t, 0)
+DEF_RVV_I32_OPS (vint32m8_t, 0)
+DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_U_OPS (vuint8mf4_t, 0)
 DEF_RVV_U_OPS (vuint8mf2_t, 0)
@@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
 DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
 
+DEF_RVV_U8_OPS (vuint8m1_t, 0)
+DEF_RVV_U8_OPS (vuint8m2_t, 0)
+DEF_RVV_U8_OPS (vuint8m4_t, 0)
+DEF_RVV_U8_OPS (vuint8m8_t, 0)
+DEF_RVV_U8_OPS (vuint16m1_t, 0)
+DEF_RVV_U8_OPS (vuint16m2_t, 0)
+DEF_RVV_U8_OPS (vuint16m4_t, 0)
+DEF_RVV_U8_OPS (vuint16m8_t, 0)
+DEF_RVV_U8_OPS (vuint32m1_t, 0)
+DEF_RVV_U8_OPS (vuint32m2_t, 0)
+DEF_RVV_U8_OPS (vuint32m4_t, 0)
+DEF_RVV_U8_OPS (vuint32m8_t, 0)
+DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U16_OPS (vuint16m1_t, 0)
+DEF_RVV_U16_OPS (vuint16m2_t, 0)
+DEF_RVV_U16_OPS (vuint16m4_t, 0)
+DEF_RVV_U16_OPS (vuint16m8_t, 0)
+DEF_RVV_U16_OPS (vuint32m1_t, 0)
+DEF_RVV_U16_OPS (vuint32m2_t, 0)
+DEF_RVV_U16_OPS (vuint32m4_t, 0)
+DEF_RVV_U16_OPS (vuint32m8_t, 0)
+DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U32_OPS (vuint32m1_t, 0)
+DEF_RVV_U32_OPS (vuint32m2_t, 0)
+DEF_RVV_U32_OPS (vuint32m4_t, 0)
+DEF_RVV_U32_OPS (vuint32m8_t, 0)
+DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
 DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
 DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
 DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
 DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
 
 #undef DEF_RVV_I_OPS
+#undef DEF_RVV_I8_OPS
+#undef DEF_RVV_I16_OPS
+#undef DEF_RVV_I32_OPS
 #undef DEF_RVV_U_OPS
+#undef DEF_RVV_U8_OPS
+#undef DEF_RVV_U16_OPS
+#undef DEF_RVV_U32_OPS
 #undef DEF_RVV_F_OPS
 #undef DEF_RVV_B_OPS
 #undef DEF_RVV_WEXTI_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 4e2c66c2de7..461447afdef 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -51,6 +51,7 @@
 #include "riscv-vector-builtins.h"
 #include "riscv-vector-builtins-shapes.h"
 #include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
 
 using namespace riscv_vector;
 
@@ -246,6 +247,63 @@ static const rvv_type_info iu_ops[] = {
 #include "riscv-vector-builtins-types.def"
   {NUM_VECTOR_TYPES, 0}};
 
+/* A list of DEF_RVV_I8_OPS types registered for intrinsic functions.  */
+static const rvv_type_info i8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_I16_OPS types registered for intrinsic functions.  */
+static const rvv_type_info i16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_I32_OPS types registered for intrinsic functions.  */
+static const rvv_type_info i32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_U8_OPS types registered for intrinsic functions.  */
+static const rvv_type_info u8_ops[] = {
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_U16_OPS types registered for intrinsic functions.  */
+static const rvv_type_info u16_ops[] = {
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_U32_OPS types registered for intrinsic functions.  */
+static const rvv_type_info u32_ops[] = {
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_I8_OPS and DEF_RVV_U8_OPS types for intrinsics.  */
+static const rvv_type_info iu8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_I16_OPS and DEF_RVV_U16_OPS types for intrinsics.  */
+static const rvv_type_info iu16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
+/* A list of DEF_RVV_I32_OPS and DEF_RVV_U32_OPS types for intrinsics.  */
+static const rvv_type_info iu32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+  {NUM_VECTOR_TYPES, 0}};
+
 /* A list of all types will be registered for intrinsic functions.  */
 static const rvv_type_info all_ops[] = {
 #define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -913,7 +971,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[]
 
 /* A list of args for vector_type func (vector_type) function.  */
 static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[]
-  = {rvv_arg_type_info (RVV_BASE_vector),
+  = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, size_t)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, eew8_index_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, eew8_index_type, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_unsigned_vector),
+     rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, size_t, vector_type)
+ * function.  */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[]
+  = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+     rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
      rvv_arg_type_info_end};
 
 /* A list of none preds that will be registered for intrinsic functions.  */
@@ -1429,6 +1512,14 @@ static CONSTEXPR const rvv_op_info iu_shift_vvv_ops
      rvv_arg_type_info (RVV_BASE_vector), /* Return type */
      shift_vv_args /* Args */};
 
+/* A static operand information for scalar_type func (vector_type, size_t)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu_x_s_u_ops
+  = {iu_ops,          /* Types */
+     OP_TYPE_vx,        /* Suffix */
+     rvv_arg_type_info (RVV_BASE_scalar), /* Return type */
+     v_size_args /* Args */};
+
 /* A static operand information for vector_type func (vector_type, size_t)
  * function registration. */
 static CONSTEXPR const rvv_op_info iu_shift_vvx_ops
@@ -2604,6 +2695,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
      rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
      ext_vcreate_args /* Args */};
 
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args  */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops
+  = {i8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops
+  = {u8_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops
+  = {i16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops
+  = {u16_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops
+  = {i32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration.  */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops
+  = {u32_ops,				  /* Types  */
+     OP_TYPE_v,				  /* Suffix  */
+     rvv_arg_type_info (RVV_BASE_vector), /* Return type  */
+     scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew8_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew16_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew32_index_type,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops
+  = {iu8_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops
+  = {iu16_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration.  */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops
+  = {iu32_ops,				/* Types  */
+     OP_TYPE_v,				/* Suffix  */
+     rvv_arg_type_info (RVV_BASE_void), /* Return type  */
+     scalar_ptr_size_args /* Args */};
+
 /* A list of all RVV base function types.  */
 static CONSTEXPR const function_type_info function_types[] = {
 #define DEF_RVV_TYPE_INDEX(                                                    \
@@ -2687,6 +2994,10 @@ static function_group_info function_groups[] = {
 #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
   {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
 #include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)                         \
+  {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS},
+#include "thead-vector-builtins-functions.def"
 };
 
 /* The RVV types, with their built-in
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 4f38c09d73d..234b6f7a196 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum required_ext
   ZVKNHB_EXT,  /* Crypto vector Zvknhb sub-ext */
   ZVKSED_EXT,  /* Crypto vector Zvksed sub-ext */
   ZVKSH_EXT,   /* Crypto vector Zvksh sub-ext */
+  XTHEADVECTOR_EXT,   /* XTheadVector extension */
 };
 
 /* Enumerates the RVV operand types.  */
@@ -252,6 +253,8 @@ struct function_group_info
         return TARGET_ZVKSED;
       case ZVKSH_EXT:
         return TARGET_ZVKSH;
+      case XTHEADVECTOR_EXT:
+	return TARGET_XTHEADVECTOR;
       default:
         gcc_unreachable ();
     }
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 067771e3c97..09512092056 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \
   $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
   $(srcdir)/config/riscv/riscv-vector-builtins-types.def \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(srcdir)/config/riscv/thead-vector-builtins-functions.def \
   $(RISCV_BUILTINS_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
@@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins-bases.cc
 
+thead-vector-builtins.o: \
+  $(srcdir)/config/riscv/thead-vector-builtins.cc \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  rtx-vector-builder.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \
+  $(srcdir)/config/riscv/thead-vector-builtins.h \
+  $(RISCV_BUILTINS_H)
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/thead-vector-builtins.cc
+
 riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \
   $(SYSTEM_H) $(TM_H)
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..667820d4c3e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,39 @@
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO)
+#endif
+
+#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT
+DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlswu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vext_x_v, th_extract, none_preds, iu_x_s_u_ops)
+#undef REQUIRED_EXTENSIONS
+
+#undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..c0002f255ee
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,200 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* Implements the codegen for
+ * th.vl(b/h/w)[u].v, th.vs(b/h/w).v, th.vls(b/h/w)[u].v,
+ * th.vss(b/h/w).v, th.vlx(b/h/w)[u].v and th.vs[u]x(b/h/w).v
+ * instructions.  */
+template<bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    if (LST_TYPE == LST_INDEXED)
+      {
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+					       e.vector_mode ()));
+	else
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+      }
+  }
+};
+
+/* Implements vext.x.v.  */
+class th_extract : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    return e.use_exact_insn (code_for_pred_th_extract (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+static CONSTEXPR const th_extract th_vext_x_v_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+BASE (th_vext_x_v)
+
+} // end namespace riscv_vector
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..4720c6334d8
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,64 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
+extern const function_base *const th_vext_x_v;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
index af77e2a8a9e..d653b944c36 100644
--- a/gcc/config/riscv/thead-vector.md
+++ b/gcc/config/riscv/thead-vector.md
@@ -1,7 +1,95 @@
 (define_c_enum "unspec" [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW
+  UNSPEC_TH_VLWU
+
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW
+  UNSPEC_TH_VLSWU
+
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VLXWU
+
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+
   UNSPEC_TH_VWLDST
 ])
 
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+  UNSPEC_TH_VLB UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+  UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+  UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+  (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+  (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+  (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+  (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+  (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+  (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+  (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+  (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+  (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+  (UNSPEC_TH_VSUXB "b")
+  (UNSPEC_TH_VSUXH "h")
+  (UNSPEC_TH_VSUXW "w")
+])
+
+(define_int_attr vlmem_order_attr [
+  (UNSPEC_TH_VLXB "")
+  (UNSPEC_TH_VLXH "")
+  (UNSPEC_TH_VLXW "")
+  (UNSPEC_TH_VSUXB "u")
+  (UNSPEC_TH_VSUXH "u")
+  (UNSPEC_TH_VSUXW "u")
+])
+
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
 (define_mode_iterator V_VLS_VT [V VLS VT])
 (define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
 
@@ -140,3 +228,168 @@
   ""
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "SI")])
+
+(define_expand "@pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand")
+	 (match_operand 4 "vector_length_operand")
+	 (match_operand 5 "const_int_operand")
+	 (match_operand 6 "const_int_operand")
+	 (match_operand 7 "const_int_operand")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "vector_move_operand")
+      (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_mov_width<vlmem_op_attr><mode>"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"	    "=vr,    vr,    vd,     m,    vr,    vr")
+    (if_then_else:V_VLS
+      (unspec:<VM>
+	[(match_operand:<VM> 1 "vector_mask_operand"	   "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+	 (match_operand 4 "vector_length_operand"	      "   rK,    rK,    rK,    rK,    rK,    rK")
+	 (match_operand 5 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 6 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (match_operand 7 "const_int_operand"		  "    i,     i,     i,     i,     i,     i")
+	 (reg:SI VL_REGNUM)
+	 (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+      (match_operand:V_VLS 3 "reg_or_mem_operand"	      "    m,     m,     m,    vr,    vr,    vr")
+      (match_operand:V_VLS 2 "vector_merge_operand"	    "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+	|| register_operand (operands[3], <MODE>mode)))"
+  "@
+   vl<vlmem_op_attr>.v\t%0,%3%p1
+   vl<vlmem_op_attr>.v\t%0,%3
+   vl<vlmem_op_attr>.v\t%0,%3,%1.t
+   vs<vlmem_op_attr>.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"    "   rK")
+	     (match_operand 4 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+	  (match_operand:VI 2 "register_operand"	 "    vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_op_attr>.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
+
+(define_insn "@pred_strided_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	      "=vr,    vr,    vd")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,   Wc1,    vm")
+	     (match_operand 5 "vector_length_operand"    "   rK,    rK,    rK")
+	     (match_operand 6 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 7 "const_int_operand"	"    i,     i,     i")
+	     (match_operand 8 "const_int_operand"	"    i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP)
+	  (unspec:VI
+	    [(match_operand:VI 3 "memory_operand"	 "    m,     m,     m")
+	     (match_operand 4 "pmode_reg_or_0_operand"   "   rJ,    rJ,    rJ")] UNSPEC_TH_VLSMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"      "    0,    vu,    vu")))]
+  "TARGET_XTHEADVECTOR"
+  "vls<vlmem_op_attr>.v\t%0,%3,%z4%p1"
+  [(set_attr "type" "vlds")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP)
+	  (unspec:VI
+	    [(match_operand 2 "pmode_reg_or_0_operand"   "   rJ")
+	     (match_operand:VI 3 "register_operand"       "   vr")] UNSPEC_TH_VSSMEM_OP)
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vss<vlmem_op_attr>.v\t%3,%0,%z2%p1"
+  [(set_attr "type" "vsts")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 5))])
+
+(define_insn "@pred_indexed_load_width<vlmem_op_attr><mode>"
+  [(set (match_operand:VI 0 "register_operand"	     "=vd, vr,vd, vr")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
+	     (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
+	     (match_operand 6 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 7 "const_int_operand"	 "  i,  i, i,  i")
+	     (match_operand 8 "const_int_operand"	 "  i,  i, i,  i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP)
+	  (unspec:VI
+	    [(match_operand 3 "pmode_reg_or_0_operand"    " rJ, rJ,rJ, rJ")
+	     (mem:BLK (scratch))
+	     (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP)
+	  (match_operand:VI 2 "vector_merge_operand"       " vu, vu, 0,  0")))]
+  "TARGET_XTHEADVECTOR"
+  "vlx<vlmem_op_attr>.v\t%0,(%z3),%4%p1"
+  [(set_attr "type" "vldux")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_indexed_<vlmem_order_attr>store_width<vlmem_op_attr><mode>"
+  [(set (mem:BLK (scratch))
+	(unspec:BLK
+	  [(unspec:<VM>
+	    [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+	     (match_operand 4 "vector_length_operand"    "   rK")
+	     (match_operand 5 "const_int_operand"	"    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP)
+	   (match_operand 1 "pmode_reg_or_0_operand"      "  rJ")
+	   (match_operand:VI 2 "register_operand" "  vr")
+	   (match_operand:VI 3 "register_operand"  "  vr")] UNSPEC_TH_VSXMEM_OP))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_order_attr>x<vlmem_op_attr>.v\t%3,(%z1),%2%p0"
+  [(set_attr "type" "vstux")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "@pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand")
+	(unspec:<VEL>
+	  [(vec_select:<VEL>
+	     (match_operand:V_VLSI 1 "register_operand")
+	     (parallel [(match_operand:DI 2 "register_operand" "r")]))
+	   (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+{})
+
+(define_insn "*pred_th_extract<mode>"
+  [(set (match_operand:<VEL> 0 "register_operand"   "=r")
+  (unspec:<VEL>
+    [(vec_select:<VEL>
+       (match_operand:V_VLSI 1 "register_operand" "vr")
+       (parallel [(match_operand:DI 2 "register_operand" "r")]))
+     (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
+  "TARGET_XTHEADVECTOR"
+  "vext.x.v\t%0,%1,%2"
+  [(set_attr "type" "vimovvx")
+   (set_attr "mode" "<MODE>")])
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
new file mode 100644
index 00000000000..4e192bbf025
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out)
+{
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+**	th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+    vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+    __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
new file mode 100644
index 00000000000..1538afec68e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
new file mode 100644
index 00000000000..bf4924a1d76
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
new file mode 100644
index 00000000000..8c451845175
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
new file mode 100644
index 00000000000..0f5b09684a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tu (v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_m (mask, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+**	th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+    vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tumu (mask, v, in, 4);
+    vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+    vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+    __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
new file mode 100644
index 00000000000..aaa75be023d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+**	th.vsetivli\tzero,4,e32,m1,tu,ma
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tu (v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,ta,ma
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_m (mask, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+**	th.vsetvli\t[a-x0-9]+,zero,e8,mf4,ta,ma
+**	th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vsetivli\tzero,4,e32,m1,tu,mu
+**	th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+**	th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+**	th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+**	th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+**	th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+**	ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+    vbool32_t mask = *(vbool32_t*)in;
+    asm volatile ("":::"memory");
+    vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+    vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tumu (mask, v, in, 4);
+    vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+    vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+    __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
-- 
2.17.1


^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  1:46       ` Jun Sha (Joshua)
@ 2023-12-29  1:58         ` juzhe.zhong
  2023-12-29  2:09           ` 回复:[PATCH " joshua
  0 siblings, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  1:58 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, cooper.joshua, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 69516 bytes --]

I am confused by this patch series.

I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html
is enough to support the parts of theadvector that can leverage RVV 1.0 directly?

Could you clean up and resend the patches based on the patch above (assuming it is merged already)?



juzhe.zhong@rivai.ai
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of XTheadVector instructions that can leverage the
current RVV 1.0 patterns directly by simply adding the "th."
prefix. XTheadVector instructions that have different names but
share the same patterns as their RVV 1.0 counterparts will be
handled in the following patches, using the ASM output target hook
to rewrite the whole instruction string.
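
As a concrete illustration (a minimal sketch modelled on the testcases
in this series; the intrinsic and mnemonic names are taken from
thead-vector-builtins-functions.def and the xtheadvector tests), the
user-visible effect is that the intrinsics carry a "th_" infix and the
emitted mnemonics gain the "th." prefix over their RVV 1.0 forms:

    #include "riscv_vector.h"

    /* With -march flags enabling XTheadVector, this is expected to emit
       something like th.vsetivli/th.vlb.v/th.vsb.v rather than the
       RVV 1.0 vsetivli/vle8.v/vse8.v sequence.  */
    void foo (void *in, void *out)
    {
      vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4); /* sign-extending byte load  */
      __riscv_th_vsb_v_i32m1 (out, v, 4);            /* truncating byte store  */
    }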
 
For some vector patterns that cannot be avoided, we add
"!TARGET_XTHEADVECTOR" to their conditions in vector.md so that
we do not generate instructions that XTheadVector does not
support, such as vmv1r and vsext.vf2.
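
Concretely, disabling a pattern amounts to tightening its condition, as
in this sketch of the rawmemchr expander (reconstructed from the
autovec.md hunk below; the surrounding body is abridged):

    (define_expand "rawmemchr<ANYI:mode>"
      [(match_operand      0 "register_operand")
       (match_operand      1 "memory_operand")
       (match_operand:ANYI 2 "const_int_operand")]
      "TARGET_VECTOR && !TARGET_XTHEADVECTOR"  ;; was just "TARGET_VECTOR"
      {
        riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
                                       operands[2]);
        DONE;
      })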
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
(build_one): New function.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewise.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
extra_objs="${extra_objs} thead.o riscv-target-attr.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
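
The predicate change above matters for constant AVLs. A sketch (the
XTheadVector assembly is an assumption following from the new
constraint, not output from this patch):

#include <riscv_vector.h>

vuint32m1_t
load4 (const uint32_t *in)
{
  /* RVV 1.0 can fold the constant:    vsetivli zero,4,e32,m1,ta,ma
     XTheadVector needs a register:    li a5,4 ; th.vsetvli zero,a5,e32,m1 */
  return __riscv_vle32_v_u32m1 (in, 4);
}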
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+                                        RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
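
Since th.vsetvli encodes only SEW and LMUL (there are no ta/ma policy
bits), the hooks above must commit to one concrete policy instead of
returning TAIL_ANY/MASK_ANY. A sketch using the explicit-policy
intrinsics (names as used in the tests elsewhere in this series):

#include <riscv_vector.h>

vuint32m1_t
masked_add (vbool32_t mask, vuint32m1_t dest, vuint32m1_t a, size_t vl)
{
  /* _tumu: tail and masked-off elements keep their old values from DEST,
     an explicit request that is valid on both RVV 1.0 and XTheadVector.  */
  return __riscv_vadd_vx_u32m1_tumu (mask, dest, a, -16, vl);
}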
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
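
A sketch of what check_type guards against: with fractional LMUL
disabled, types such as vint32mf2_t are never registered under
XTheadVector, so a builtin whose signature mentions them would carry a
NULL type and must be skipped (the rejection below is the expected
behavior; the exact diagnostic is an assumption):

#include <riscv_vector.h>

/* Always available: LMUL >= 1 types exist on both.  */
vint32m1_t pass_m1 (vint32m1_t v) { return v; }

/* Expected to be rejected when compiling for XTheadVector, since the
   fractional-LMUL type vint32mf2_t is no longer defined there.  */
vint32mf2_t pass_mf2 (vint32mf2_t v) { return v; }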
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+        if (change_vtype_only_p ())
+          return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+        else if (has_vl () && !ignore_vl)
+          return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+        else
+          return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
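
The three shapes correspond to how the vsetvl result is used. A sketch
(register choices are illustrative; the mnemonics follow the th_vsetvl
patterns added below):

#include <riscv_vector.h>

size_t
pick_vl (size_t n)
{
  /* Result used:        vsetvli a0,a0,e32,m1      (gen_th_vsetvl)
     Result discarded:   vsetvli zero,a0,e32,m1    (gen_th_vsetvl_discard_result)
     vtype change only:  vsetvli zero,zero,e32,m1  (gen_th_vsetvl_vtype_change_only) */
  return __riscv_vsetvl_e32m1 (n);
}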
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
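
Minimal usage sketch for the new header (the th-prefixed load
intrinsic is the one exercised by the tests elsewhere in this series):

#include <riscv_th_vector.h>

vuint32m1_t
load_words (uint32_t *in, size_t vl)
{
  return __riscv_th_vlwu_v_u32m1 (in, vl);   /* th.vlwu.v */
}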
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX,
+                                      GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:V_VLS_VT
+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:VB
+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It's emitted by vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to gain the benefits of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
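
The point of the no-side-effects variant, sketched below: between
expand and reload the pattern is a plain computation, so ordinary
optimizations apply to it; for example, two identical vsetvlmax calls
should CSE into one before the split reattaches the VL/VTYPE sets (an
expectation, not verified output):

#include <riscv_vector.h>

size_t
two_vlmax (void)
{
  size_t a = __riscv_vsetvlmax_e32m1 ();
  size_t b = __riscv_vsetvlmax_e32m1 ();
  return a + b;   /* expected to fold to a single th.vsetvli */
}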
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1
 
 

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  1:58         ` juzhe.zhong
@ 2023-12-29  2:09           ` joshua
  2023-12-29  2:11             ` Re: [PATCH " joshua
  2023-12-29  2:14             ` Re: [PATCH " juzhe.zhong
  0 siblings, 2 replies; 69+ messages in thread
From: joshua @ 2023-12-29  2:09 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

Hi Juzhe,

This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.

BTW, what about the following patch, "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new
xtheadvector instructions. Is it OK to be merged?

Joshua

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector


I am confused by this patch series.


I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html
was enough to support the subset of theadvector that can directly leverage RVV 1.0?


Could you clean up and resend the patches based on the patch above (assuming it is merged already)?


juzhe.zhong@rivai.ai

 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that map directly onto
current RVV 1.0 instructions by simply adding the "th." prefix.
For xtheadvector instructions that have different names but share
the same patterns as RVV 1.0 instructions, we will use the ASM
target hook to rewrite the whole instruction string in the
following patches.
 
For some vector patterns that cannot be supported, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do
not generate instructions that xtheadvector does not support,
such as vmv1r and vsext.vf2.
 
gcc/ChangeLog:
 
	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type)
	(build_one): New functions.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION)
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+      {
+	emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					  RVV_VLMAX, GEN_INT(VLMAX)));
+	return true;
+      }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR) {
+      if (change_vtype_only_p ())
+	return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+      else if (has_vl () && !ignore_vl)
+	return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+      else
+	return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+    }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT(riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,17 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1
 
 




* Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:09           ` Re: [PATCH " joshua
@ 2023-12-29  2:11             ` joshua
  2023-12-29  2:14             ` Re: [PATCH " juzhe.zhong
  1 sibling, 0 replies; 69+ messages in thread
From: joshua @ 2023-12-29  2:11 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

Hi Juzhe,

This patch, "RISC-V: Handle differences between XTheadvector and
Vector", addresses code generation issues for RVV 1.0 instructions
that xtheadvector does not provide; it is not about intrinsics.
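
As a concrete illustration (this example is mine, not from the patch),
a whole mask-register copy is one such case:

#include <riscv_vector.h>

/* A minimal sketch: returning the second mask may need a whole
   mask-register move.  RVV 1.0 can satisfy it with vmv1r.v, an
   instruction XTheadVector lacks, so the move has to be expanded
   through the pred_th_whole_mov pattern instead.  */
vbool32_t
pick_second_mask (vbool32_t a, vbool32_t b)
{
  return b;
}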

BTW, what about the following patch, "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for the new
xtheadvector instructions. Is it OK to be merged?

Joshua






------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023, 09:58
To: "cooper.joshua" <cooper.joshua@linux.alibaba.com>; "gcc-patches" <gcc-patches@gcc.gnu.org>
Cc: Jim Wilson <jim.wilson.gcc@gmail.com>; palmer <palmer@dabbelt.com>; andrew <andrew@sifive.com>; "philipp.tomsich" <philipp.tomsich@vrull.eu>; jeffreyalaw <jeffreyalaw@gmail.com>; "christoph.muellner" <christoph.muellner@vrull.eu>; "cooper.joshua" <cooper.joshua@linux.alibaba.com>; jinma <jinma@linux.alibaba.com>; "cooper.qu" <cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector


I am confused by this series of patches.

I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html
was enough to support the partial theadvector subset that can directly
leverage RVV 1.0?

Could you clean up and resend the patches based on the patch above
(assuming it has already been merged)?


juzhe.zhong@rivai.ai

 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector

This patch handles the differences in instruction generation between
Vector and XTheadVector. In this version, we only support the subset of
xtheadvector instructions that map directly onto current RVV 1.0
instructions by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV 1.0 instructions, we will use the ASM target hook to rewrite the
whole instruction string in the following patches.
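
As a rough sketch of what "map directly" means (the C code and exact
operands below are illustrative, not taken from this patch), the same
intrinsic source serves both ISAs and only the emitted mnemonic gains
the prefix:

#include <riscv_vector.h>

size_t
vl_for_e32m1 (size_t n)
{
  /* RVV 1.0 emits:       vsetvli a0,a0,e32,m1,ta,ma
     XTheadVector emits:  th.vsetvli a0,a0,e32,m1  (no ta/ma fields)  */
  return __riscv_vsetvl_e32m1 (n);
}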
 
For the vector patterns that cannot be handled this way, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do not
generate instructions that xtheadvector does not support, such as
vmv1r and vsext.vf2, as in the example below.
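
For instance (a minimal sketch, assuming autovectorization flags along
the lines of -O3 -march=rv64gcv), the widening loop below can be
vectorized with vsext.vf2 on RVV 1.0; with that pattern fenced off,
XTheadVector has to widen by other means:

void
widen (short *restrict dst, const signed char *restrict src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i];	/* QI -> HI sign extension */
}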
 
gcc/ChangeLog:
 
	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New
	function.
	(build_one): Call check_type.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,17 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT(riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to gain the benefits of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1
 
 



^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:09           ` Re: [PATCH " joshua
  2023-12-29  2:11             ` Re:[PATCH " joshua
@ 2023-12-29  2:14             ` juzhe.zhong
  2023-12-29  2:17               ` Re:[PATCH " joshua
  1 sibling, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  2:14 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 71719 bytes --]

No, we should handle this carefully step by step.

First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.

I am confused by this patch; for example:

 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
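 
If I read this hunk correctly, it roughly amounts to the following C
logic (my own hypothetical restatement, not code from the patch):
 
/* Hypothetical restatement: under XTheadVector, an immediate AVL is
   accepted only when it is 0; any other AVL must be in a register.  */
static bool
vector_csr_operand_p (rtx op)
{
  if (const_csr_operand (op, VOIDmode))
    return !TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx);
  return register_operand (op, VOIDmode);
}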

I just checked the upstream code; we don't have vector_csr_operand.

So, to make it easier for me to review and trace the code, please send the patches better organized.

Thanks.


juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
 
This patch, "RISC-V: Handle differences between XTheadvector and
Vector", addresses code generation issues for RVV 1.0 instructions
that xtheadvector does not have; it does not deal with intrinsics.
 
BTW, what about the following patch, "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for the new
xtheadvector instructions. Is it OK to be merged? With the
riscv_th_vector.h header from this series, user code could look
roughly like the sketch below.
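 
Something along these lines (illustrative only: the th_-prefixed
intrinsic names below are hypothetical and depend on that intrinsics
patch; the vsetvl intrinsic is the standard RVV one that this series
reuses):
 
#include <riscv_th_vector.h>
 
/* Copy n bytes strip by strip.  The __riscv_th_* names are
   hypothetical placeholders for the xtheadvector load/store
   intrinsics.  */
void
copy (int8_t *dst, int8_t *src, size_t n)
{
  while (n > 0)
    {
      size_t vl = __riscv_vsetvl_e8m1 (n);
      vint8m1_t v = __riscv_th_vlb_v_i8m1 (src, vl);
      __riscv_th_vsb_v_i8m1 (dst, v, vl);
      src += vl;
      dst += vl;
      n -= vl;
    }
}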
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector extension that can directly leverage RVV 1.0?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation between
Vector and XTheadVector. In this version, we only support the subset
of xtheadvector instructions that can be derived directly from current
RVV 1.0 by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV 1.0 instructions, we will use the ASM target hook to rewrite the
whole instruction string in the following patches, roughly as sketched
below.
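 
A minimal sketch of the rewriting idea, assuming the ASM_OUTPUT_OPCODE
mechanism (the helper name and the 'v'-prefix filter are hypothetical;
the actual hook in the follow-up patches may differ):
 
/* Hypothetical sketch: prepend "th." to vector mnemonics when
   XTheadVector is enabled.  Treating every mnemonic that starts
   with 'v' as a vector instruction is an assumption; a real
   implementation needs a stricter check.  */
const char *
th_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}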
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do
not generate instructions that xtheadvector does not support, such
as vmv1r and vsext.vf2.
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New function.
(build_one): Call check_type.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewsie.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
extra_objs="${extra_objs} thead.o riscv-target-attr.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+        if (change_vtype_only_p ())
+          return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+        else if (has_vl () && !ignore_vl)
+          return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+        else
+          return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+        return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
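
As a usage sketch of the header above (assuming the pragma registers the
same vint8m1_t-style type names that riscv_vector.h registers; the file
name and function here are illustrative only, not part of the patch):

    /* sketch.c: compile with an -march string that enables xtheadvector.  */
    #include <riscv_th_vector.h>

    vint8m1_t
    copy_value (vint8m1_t v)
    {
      return v;  /* the type is only usable after the pragma has run */
    }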
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:V_VLS_VT
+          [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+           (match_operand 3 "const_1_operand"         "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:VB
+          [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+           (match_operand 3 "const_1_operand"         "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+          UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+        (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+                   (match_operand 2 "const_int_operand" "i")
+                   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+        (unspec:SI [(match_dup 1)
+                    (match_dup 2)
+                    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+        (unspec:SI [(match_dup 2)
+                    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+        (unspec:SI
+          [(match_operand 0 "const_int_operand" "i")
+           (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+        (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+                    (match_operand 1 "const_int_operand" "i")
+                    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+        (unspec:SI [(match_dup 1)
+                    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; this pattern lets us benefit from those optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+        (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+                   (match_operand 2 "const_int_operand" "i")
+                   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+          (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+          (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+          (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:14             ` Re: [PATCH " juzhe.zhong
@ 2023-12-29  2:17               ` joshua
  2023-12-29  2:22                 ` juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-29  2:17 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

Hi Juzhe,

For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.

Joshua





------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector


No, we should handle this carefully step by step.


First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.


I am confused by this patch, for example:


 (define_predicate "vector_csr_operand"-  (ior (match_operand 0 "const_csr_operand")-       (match_operand 0 "register_operand")))+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")+      (match_operand 0 "const_csr_operand"))+    (match_operand 0 "register_operand")))


I just checked the upstream code; we don't have vector_csr_operand.


So, to make it easy for me to review and trace the code, please send the patches better organized.


Thanks.
juzhe.zhong@rivai.ai

 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new xtheadvector
instructions. Is it OK to be merged?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can directly leverage RVV 1.0?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that leverage the current
RVV 1.0 patterns directly by simply adding the "th." prefix. For
xtheadvector instructions that have different names but share the
same patterns as RVV 1.0 instructions, we will use the ASM target
hook to rewrite the whole instruction string in the following
patches.
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
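
As a rough illustration of the ASM target hook idea mentioned above (a
minimal sketch, not the actual follow-up patch; the hook wiring via
ASM_OUTPUT_OPCODE and the single-character mnemonic test are assumptions
for illustration):

const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  /* Hypothetical: at final assembly output, prepend "th." to vector
     mnemonics whose XTheadVector spelling differs only by prefix,
     e.g. "vadd.vv" would be printed as "th.vadd.vv".  */
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}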
 
gcc/ChangeLog:
 
	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new marcos.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewsie.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	     (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,17 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It
+   does not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
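
A minimal usage sketch for the new header (assumptions: shared
intrinsics keep their RVV 1.0 names under xtheadvector, as the vsetvl
change in riscv-vector-builtins-bases.cc above suggests, and the -march
spelling rv64gc_xtheadvector enables the extension):

  #include <riscv_th_vector.h>

  size_t
  get_vl (size_t n)
  {
    /* Under this series the vsetvl intrinsic expands through the
       th_vsetvl patterns added in thead-vector.md below.  */
    return __riscv_vsetvl_e32m1 (n);
  }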
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadVector and Vector
  2023-12-29  2:17               ` Re:[PATCH " joshua
@ 2023-12-29  2:22                 ` juzhe.zhong
  2023-12-29  2:25                   ` Re:Re:[PATCH " joshua
  0 siblings, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  2:22 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

Why add vector_csr_operand?
Why not use vector_length_operand?



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadVector and Vector
Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadVector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.
 
 
I am confused by this patch, for example:
 
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easier for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadVector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch " RISC-V: Add support for
xtheadvector-specific intrinsics"?It adds support new xtheadvector
instructions. Is it OK to be merged?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadVector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
was enough to support the partial theadvector that can leverage RVV 1.0 directly?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that can leverage the current
RVV 1.0 patterns directly by simply adding the "th." prefix. For
xtheadvector instructions that share the same patterns as RVV 1.0
instructions but use different names, we will use the ASM target hook
to rewrite the whole instruction string in the following patches
(a rough sketch follows below).
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do
not generate instructions that xtheadvector does not support,
such as vmv1r and vsext.vf2.
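 
To make the renaming concrete, here is a rough sketch (the function
name is hypothetical and it is not part of this patch) of how such a
hook could prepend the prefix through GCC's ASM_OUTPUT_OPCODE target
macro:
 
/* Sketch only: prepend "th." to vector mnemonics at asm-output time.
   The name th_output_opcode_sketch is illustrative.  */
static const char *
th_output_opcode_sketch (FILE *asm_out_file, const char *p)
{
  /* XTheadVector mnemonics are the RVV ones prefixed with "th.",
     e.g. "vadd.vv" becomes "th.vadd.vv".  */
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}
 
The exact shape of the hook will be part of the follow-up patches
mentioned above.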
 
gcc/ChangeLog:
 
	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/predicates.md (vector_csr_operand): Guard
	XTheadVector.
	(vector_length_operand): Likewise.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-bases.cc: Generate
	th_vsetvl for XTheadVector.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	New function.
	(build_one): Check types for XTheadVector.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for XTheadVector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+            (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+                                        RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+        if (change_vtype_only_p ())
+          return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+        else if (has_vl () && !ignore_vl)
+          return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+        else
+          return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+        return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+        (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+                                      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:V_VLS_VT
+          [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand"       " rK, rK, rK")
+           (match_operand 3 "const_1_operand"             "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+        UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+        (unspec:VB
+          [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+           (match_operand 2 "vector_length_operand" " rK, rK, rK")
+           (match_operand 3 "const_1_operand"       "  i, i, i")
+           (reg:SI VL_REGNUM)
+           (reg:SI VTYPE_REGNUM)]
+        UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+        (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+                   (match_operand 2 "const_int_operand" "i")
+                   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+        (unspec:SI [(match_dup 1)
+                    (match_dup 2)
+                    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+        (unspec:SI [(match_dup 2)
+                    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+        (unspec:SI
+          [(match_operand 0 "const_int_operand" "i")
+           (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+        (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+                    (match_operand 1 "const_int_operand" "i")
+                    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+        (unspec:SI [(match_dup 1)
+                    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; This pattern is emitted by the vsetvl/vsetvlmax intrinsics with no side
+;; effects.  Since we have many optimization passes from "expand" to
+;; "reload_completed", such a pattern lets us gain the benefits of these
+;; optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+        (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+                   (match_operand 2 "const_int_operand" "i")
+                   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+          (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+          (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+          (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need a default SEW value for the vsetvl instruction, since
;; there is no field for the ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re:Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:22                 ` juzhe.zhong
@ 2023-12-29  2:25                   ` joshua
  2023-12-29  2:25                     ` Re:[PATCH " juzhe.zhong
  0 siblings, 1 reply; 69+ messages in thread
From: joshua @ 2023-12-29  2:25 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

We do not have vector_length_operand in vsetvl patterns.

(define_insn "@vsetvl<mode>"
  [(set (match_operand:P 0 "register_operand" "=r")
	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
		   (match_operand 2 "const_int_operand" "i")
		   (match_operand 3 "const_int_operand" "i")
		   (match_operand 4 "const_int_operand" "i")
		   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
   (set (reg:SI VL_REGNUM)
	(unspec:SI [(match_dup 1)
		    (match_dup 2)
		    (match_dup 3)] UNSPEC_VSETVL))
   (set (reg:SI VTYPE_REGNUM)
	(unspec:SI [(match_dup 2)
		    (match_dup 3)
		    (match_dup 4)
		    (match_dup 5)] UNSPEC_VSETVL))]
  "TARGET_VECTOR"
  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
  [(set_attr "type" "vsetvl")
   (set_attr "mode" "<MODE>")
   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
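
Note that the "%i1" modifier in the output template above selects
between the register and immediate forms of the instruction, which is
why the pattern accepts a constant AVL at all. Illustrative output,
with made-up operand values:

	vsetvli  a0,a1,e32,m1,ta,ma    # AVL in a register
	vsetivli a0,8,e32,m1,ta,ma     # immediate AVL matched by const_csr_operand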







------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Date: Friday, December 29, 2023, 10:22
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector


Why add vector_csr_operand?
Why not use vector_length_operand?


juzhe.zhong@rivai.ai

 
From: joshua
Date: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector

Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Date: Friday, December 29, 2023, 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, then we can talk about the second kind of theadvector later.
 
 
I am confused by this patch, for example:
 
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easier for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Date: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" addresses code generation for RVV1.0 instructions that
xtheadvector does not have; it does not deal with intrinsics.
 
BTW, what about the following patch "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for the new
xtheadvector instructions. Is it OK to be merged?
 
Joshua

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Date: Friday, December 29, 2023, 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can leverage RVV1.0 directly?
 
 
Could you clean up and resend the patches based on the patch above (assuming it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation between
Vector and XTheadVector. In this version, we only support the partial
xtheadvector instructions that can leverage the current RVV1.0
patterns directly by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV1.0 instructions, we will use the ASM targethook to rewrite the
whole instruction string in the following patches.
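
A rough sketch of what such an ASM targethook could look like (the
function name and the single-character mnemonic test here are
illustrative assumptions, not the final implementation):

/* Hypothetical ASM_OUTPUT_OPCODE-style hook: prepend "th." to vector
   mnemonics when XTheadVector is enabled, so that e.g. "vsetvli" is
   emitted as "th.vsetvli".  */
const char *
th_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}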
 
For some vector patterns that cannot be avoided, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do not
generate instructions that xtheadvector does not support, such as
vmv1r and vsext.vf2.
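
For instance, the vmv1r move pattern mentioned above ends up guarded
like this in vector.md (as in the vector.md change shown earlier in
this thread):

(define_insn "*mov<mode>"
  [(set (match_operand:VB 0 "register_operand" "=vr")
	(match_operand:VB 1 "register_operand" " vr"))]
  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "vmv1r.v\t%0,%1"
  [(set_attr "type" "vmov")
   (set_attr "mode" "<MODE>")])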
 
gcc/ChangeLog:
 
	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New
	function.
	(build_one): Check return and argument types for XTheadVector.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+      {
+	emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					  RVV_VLMAX, GEN_INT(VLMAX)));
+	return true;
+      }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,17 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">" target="_blank">" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
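
For illustration only, a translation unit that consumes the new header could
look like the sketch below; the -march spelling and the vint32m1_t type name
are assumptions (the pragma is expected to register the usual RVV types), not
something this hunk defines explicitly:

/* sketch.c -- assumed to be compiled with something like:
   riscv64-unknown-elf-gcc -march=rv64gc_xtheadvector -mabi=lp64d -S sketch.c  */
#include <riscv_th_vector.h>

void
foo (void)
{
  vint32m1_t t;	/* Type assumed to be registered by the pragma.  */
  (void) t;	/* Silence the unused-variable warning.  */
}
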
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to gain the benefits of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need a default value of SEW for the vsetvl instruction
;; since there is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,17 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
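
For reference, a test can then key off the new effective target in either
polarity. Mirroring the abi-1.c change above, a minimal sketch of a
positive-polarity test header is:

/* { dg-do compile { target { riscv_xtheadvector } } } */

so that the test only runs when the compiler predefines __riscv_xtheadvector.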
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:25                   ` Re:Re:[PATCH " joshua
@ 2023-12-29  2:25                     ` juzhe.zhong
  2023-12-29  2:30                       ` joshua
  0 siblings, 1 reply; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  2:25 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 75127 bytes --]

Change it into vector_length_operand.



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-29 10:25
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
We do not have vector_length_operand in vsetvl patterns.
 
(define_insn "@vsetvl<mode>"
  [(set (match_operand:P 0 "register_operand" "=r")
	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
		   (match_operand 2 "const_int_operand" "i")
		   (match_operand 3 "const_int_operand" "i")
		   (match_operand 4 "const_int_operand" "i")
		   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
   (set (reg:SI VL_REGNUM)
	(unspec:SI [(match_dup 1)
		    (match_dup 2)
		    (match_dup 3)] UNSPEC_VSETVL))
   (set (reg:SI VTYPE_REGNUM)
	(unspec:SI [(match_dup 2)
		    (match_dup 3)
		    (match_dup 4)
		    (match_dup 5)] UNSPEC_VSETVL))]
  "TARGET_VECTOR"
  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
  [(set_attr "type" "vsetvl")
   (set_attr "mode" "<MODE>")
   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:22
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Why add vector_csr_operand?
Why not use vector_length_operand?
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua

------------------------------------------------------------------
发件人:juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
发送时间:2023年12月29日(星期五) 10:14
收件人:"cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
抄 送:Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
主 题:Re: 回复:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.
 
 
I am confused by this patch, for example:
 
 
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easy for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch, "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new xtheadvector
instructions. Is it OK to be merged?
 
Joshua

------------------------------------------------------------------
发件人:juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
发送时间:2023年12月29日(星期五) 09:58
收件人:"cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
抄 送:Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
主 题:Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can directly leverage RVV1.0?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that leverage current
RVV1.0 directly by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns
as RVV1.0 instructions, we will use an ASM targethook to rewrite
the whole instruction string in the following patches.
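
As a rough illustration of that direction (the hook name and its exact
shape are assumptions here; the targethook patch comes later in the
series, so treat this as a sketch rather than the final code), the
rewrite can be as simple as prepending the prefix when an opcode is
printed:

/* Sketch only: prepend "th." to vector mnemonics (which all start
   with 'v') when XTheadVector is enabled, via an ASM_OUTPUT_OPCODE
   style hook.  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}
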
 
For the vector patterns whose use cannot otherwise be avoided, we add
"!TARGET_XTHEADVECTOR" to their conditions in vector.md so that we do
not generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type): New function.
(build_one): Call check_type.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewise.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
    riscv_vector::expand_rawmemchr (<MODE>mode, operands[0], operands[1],
				    operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:V_VLS_VT
+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:VB
+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; This pattern is emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we run many optimization passes between "expand" and "reload_completed",
+;; such a pattern allows us to gain the benefit of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:25                     ` Re:[PATCH " juzhe.zhong
@ 2023-12-29  2:30                       ` joshua
  2023-12-29  2:31                         ` juzhe.zhong
  2023-12-29  2:47                         ` juzhe.zhong
  0 siblings, 2 replies; 69+ messages in thread
From: joshua @ 2023-12-29  2:30 UTC (permalink / raw)
  To: juzhe.zhong, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

Hi Juzhe,

These vsetvl patterns were written by you with csr_operand initially.
Are you sure it can be replaced by vector_length_operand?
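
For reference, the upstream definition in predicates.md (quoted from the
diff later in this thread, before this series touches it) is:

(define_special_predicate "vector_length_operand"
  (ior (match_operand 0 "pmode_register_operand")
       (match_operand 0 "const_csr_operand")))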

Joshua

------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:25
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector


Change it into vector_length_operand.


juzhe.zhong@rivai.ai

 
From: joshua
Sent: 2023-12-29 10:25
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector

We do not have vector_length_operand in vsetvl patterns.
 
(define_insn "@vsetvl<mode>"
  [(set (match_operand:P 0 "register_operand" "=r")
	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
		   (match_operand 2 "const_int_operand" "i")
		   (match_operand 3 "const_int_operand" "i")
		   (match_operand 4 "const_int_operand" "i")
		   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
   (set (reg:SI VL_REGNUM)
	(unspec:SI [(match_dup 1)
		    (match_dup 2)
		    (match_dup 3)] UNSPEC_VSETVL))
   (set (reg:SI VTYPE_REGNUM)
	(unspec:SI [(match_dup 2)
		    (match_dup 3)
		    (match_dup 4)
		    (match_dup 5)] UNSPEC_VSETVL))]
  "TARGET_VECTOR"
  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
  [(set_attr "type" "vsetvl")
   (set_attr "mode" "<MODE>")
   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:22
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Why add vector_csr_operand ?
Why not use vector_length_operand?
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.
 
 
I am confused by this patch, for example:
 
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easier for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new
xtheadvector instructions. Is it OK to be merged?
 
Joshua
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can leverage RVV1.0 directly?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the partial xtheadvector instructions that can be derived directly
from current RVV1.0 by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV1.0 instructions, we will use an ASM targethook to rewrite the whole
instruction string in the following patches.
 
For some vector patterns that xtheadvector cannot reuse, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md in order
not to generate instructions that xtheadvector does not support,
like vmv1r and vsext.vf2.
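
As an illustration of the targethook approach mentioned above (a sketch
only; the use of ASM_OUTPUT_OPCODE and the helper shown here are
assumptions about the follow-up patches, not code from this series):

/* Sketch: print "th." before a vector mnemonic when xtheadvector is
   enabled; final () then prints the rest of the template unchanged.  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}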
 
gcc/ChangeLog:
 
	* config.gcc:  Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	(build_one): New function.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-bases.cc      |   3 +
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 17 files changed, 474 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
 				   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 ;; Predicates for the V extension.
 (define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+       (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))))
 
 (define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
 	bnez a2, loop                   # Any more?
 	ret                             # Return
   */
+  if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
 
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
 
+  if (TARGET_XTHEADVECTOR)
+    {
+      emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+					RVV_VLMAX, GEN_INT (VLMAX)));
+      return true;
+    }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
 }
 
 /* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
 }
 
 /* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
 bool
 vls_mode_valid_p (machine_mode vls_mode)
 {
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
 
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
 
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
 			 gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
 
 namespace riscv_vector {
 
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
 /* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
 			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">" target="_blank">" target="_blank">" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"         "  i, i, i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+	(unspec:SI
+	  [(match_operand 0 "const_int_operand" "i")
+	   (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+		    (match_operand 1 "const_int_operand" "i")
+		    (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+	  (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+	  (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+	  (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
 ])
 
 (define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
 ;; allow the instruction and mode to be matched during combine et al.
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
 ])
 
 (define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
 ])
 
 (define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
 ])
 
 (define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
 
@@ -509,17 +509,17 @@
 ])
 
 (define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
 ])
 
 (define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
 
 (define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
 ])
 
 (define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
 ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
 ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
 (define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
 ])
 
 (define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
 ])
 
 (define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 ])
 
 (define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
 ])
 
 (define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 ])
 
 (define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
 
 (define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
 ])
 
 (define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
 ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
 
 (define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
 
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
 
 (define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
 ])
 
 (define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
 ;; check. However, we need default value of SEW for vsetvl instruction since there
 ;; is no field for ratio in the vsetvl instruction encoding.
 (define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
 			  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
 			  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
 			  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
 			  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
 			  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
 	 (const_int 8)
+	 (eq_attr "mode" "RVVMF16BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 16)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF32BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 32)
+	     (const_int 8))
+	 (eq_attr "mode" "RVVMF64BI")
+	   (if_then_else (match_test "TARGET_XTHEADVECTOR")
+	     (const_int 64)
+	     (const_int 8))
 	 (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
 			  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
 			  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
 	 (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
-	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
-	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
-	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+	 (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+	 (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+	 (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
 	 (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
 	 (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
 	 (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
 			  vislide1up,vislide1down,vfslide1up,vfslide1down,\
 			  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
 	   (const_int INVALID_ATTRIBUTE)
+	 (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+			       vlsegdff,vssegtux,vlsegdox,vlsegdux")
+	      (match_test "TARGET_XTHEADVECTOR"))
+	   (const_int INVALID_ATTRIBUTE)
 	 (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
 	 (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
 	 (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
 	 (symbol_ref "riscv_vector::FRM_DYN")]
 	(symbol_ref "riscv_vector::FRM_NONE")))
 
+(include "thead-vector.md")
+
 ;; -----------------------------------------------------------------
 ;; ---- Miscellaneous Operations
 ;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
 (define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
 	(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
 (define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
 	(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
 	  (any_extend:VWEXTI
 	    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
 	  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
 	  (any_extend:VQEXTI
 	    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
 	  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
 	  (any_extend:VOEXTI
 	    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
 	  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
 /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
 
 void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
 /* { dg-do compile } */
 /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
 
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
 }
 
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
 # Return 1 if we can execute code when using dg-add-options riscv_v
 
 proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:30                       ` joshua
@ 2023-12-29  2:31                         ` juzhe.zhong
  2023-12-29  2:47                         ` juzhe.zhong
  1 sibling, 0 replies; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  2:31 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 76357 bytes --]

Yes.



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-29 10:30
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
 
These vsetvl patterns were written by you with csr_operand initially.
Are you sure it can be replaced by vector_length_operand?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-29 (Friday) 10:25
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Change it into vector_length_operand.
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:25
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
We do not have vector_length_operand in vsetvl patterns.
 
(define_insn "@vsetvl<mode>"
  [(set (match_operand:P 0 "register_operand" "=r")
(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
   (match_operand 2 "const_int_operand" "i")
   (match_operand 3 "const_int_operand" "i")
   (match_operand 4 "const_int_operand" "i")
   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
   (set (reg:SI VL_REGNUM)
(unspec:SI [(match_dup 1)
    (match_dup 2)
    (match_dup 3)] UNSPEC_VSETVL))
   (set (reg:SI VTYPE_REGNUM)
(unspec:SI [(match_dup 2)
    (match_dup 3)
    (match_dup 4)
    (match_dup 5)] UNSPEC_VSETVL))]
  "TARGET_VECTOR"
  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
  [(set_attr "type" "vsetvl")
   (set_attr "mode" "<MODE>")
   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
 
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-29 (Friday) 10:22
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Why add vector_csr_operand?
Why not use vector_length_operand?
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-29 (Friday) 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.
 
 
I am confused by this part of the patch, for example:
 
 
(define_predicate "vector_csr_operand"-  (ior (match_operand 0 "const_csr_operand")-       (match_operand 0 "register_operand")))+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")+      (match_operand 0 "const_csr_operand"))+    (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easier for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch " RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new xtheadvector
instructions. Is it OK to be merged?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: 2023-12-29 (Friday) 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can leverage RVV1.0 directly?
 
 
Could you clean up and resend the patches based on the patch above (supposing it is merged already)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that map directly onto current
RVV1.0 instructions by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns as
RVV1.0 instructions, we will use an ASM targethook to rewrite the whole
instruction string in the following patches.
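
A minimal sketch of that targethook idea (not part of this patch; the
real hook comes later in the series): GCC's ASM_OUTPUT_OPCODE macro lets
the backend inspect each mnemonic as it is written out, so a hypothetical
hook along these lines can prepend the prefix while every pattern keeps
emitting plain RVV mnemonics:

/* Sketch only.  Prepend "th." to every vector mnemonic when
   XTheadVector is enabled, so a pattern can keep emitting "vadd.vv"
   and the output pass turns it into "th.vadd.vv".  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  if (TARGET_XTHEADVECTOR && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}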
 
For the vector patterns that cannot be avoided, we guard them with
"!TARGET_XTHEADVECTOR" in vector.md so that we do not generate
instructions that xtheadvector does not support, such as vmv1r
and vsext.vf2.
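
As a concrete instance of such a guard, here is the mask-register move
pattern from the vector.md hunk earlier in this series; it simply
refuses to match under XTheadVector, and legitimize_move routes such
moves to @pred_th_whole_mov<mode> in thead-vector.md instead, which
emits a plain vmv.v.v, vle.v or vse.v:

(define_insn "*mov<mode>"
  [(set (match_operand:VB 0 "register_operand" "=vr")
        (match_operand:VB 1 "register_operand" " vr"))]
  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
  "vmv1r.v\t%0,%1"
  [(set_attr "type" "vmov")
   (set_attr "mode" "<MODE>")])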
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
(build_one): New function.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new macros.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewise.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
extra_objs="${extra_objs} thead.o riscv-target-attr.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+      {
+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+   RVV_VLMAX, GEN_INT(VLMAX)));
+ return true;
+      }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 :LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2 , 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR) {
+      if (change_vtype_only_p ())
+ return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+      else if (has_vl () && !ignore_vl)
+ return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+      else
+ return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+    }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+ return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">" target="_blank">" target="_blank">" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+       RVV_VLMAX, GEN_INT(riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:V_VLS_VT
+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:VB
+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we should avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; It's emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern allows us to gain the benefits of these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
-- 
2.17.1

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
  2023-12-29  2:30                       ` joshua
  2023-12-29  2:31                         ` juzhe.zhong
@ 2023-12-29  2:47                         ` juzhe.zhong
  1 sibling, 0 replies; 69+ messages in thread
From: juzhe.zhong @ 2023-12-29  2:47 UTC (permalink / raw)
  To: cooper.joshua, gcc-patches
  Cc: Jim Wilson, palmer, andrew, philipp.tomsich, jeffreyalaw,
	christoph.muellner, jinma, cooper.qu

[-- Attachment #1: Type: text/plain, Size: 77278 bytes --]

Btw, I think the following code (from the previous patch I approved) would be better changed:

+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add th. prefix to all the xtheadvector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
into:
/* Define ASM_OUTPUT_OPCODE to do anything special before
   emitting an opcode.  */
const char *
riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
{
  /* We need to add th. prefix to all the xtheadvector
     instructions here.  */
  if (TARGET_XTHEADVECTOR && current_output_insn != NULL_RTX && p[0] == 'v')
    fputs ("th.", asm_out_file);
  return p;
}

This makes it easier to maintain in the future.
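
As a quick illustration of the effect (the operands here are made up, not
taken from the patch): for an insn whose output template starts with 'v',
e.g.

  vadd.vv	v4,v8,v12

the hook prints the "th." prefix before GCC prints the template, so the
assembler sees

  th.vadd.vv	v4,v8,v12

while opcodes that do not start with 'v' are emitted unchanged.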



juzhe.zhong@rivai.ai
 
From: joshua
Sent: 2023-12-29 10:30
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
Hi Juzhe,
 
These vsetvl patterns were written by you with csr_operand initially.
Are you sure it can be replaced by vector_length_operand?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023, 10:25
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Change it into vector_length_operand.
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:25
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
We do not have vector_length_operand in vsetvl patterns.
 
(define_insn "@vsetvl<mode>"
  [(set (match_operand:P 0 "register_operand" "=r")
(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
   (match_operand 2 "const_int_operand" "i")
   (match_operand 3 "const_int_operand" "i")
   (match_operand 4 "const_int_operand" "i")
   (match_operand 5 "const_int_operand" "i")] UNSPEC_VSETVL))
   (set (reg:SI VL_REGNUM)
(unspec:SI [(match_dup 1)
    (match_dup 2)
    (match_dup 3)] UNSPEC_VSETVL))
   (set (reg:SI VTYPE_REGNUM)
(unspec:SI [(match_dup 2)
    (match_dup 3)
    (match_dup 4)
    (match_dup 5)] UNSPEC_VSETVL))]
  "TARGET_VECTOR"
  "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
  [(set_attr "type" "vsetvl")
   (set_attr "mode" "<MODE>")
   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))
   (set (attr "ta") (symbol_ref "INTVAL (operands[4])"))
   (set (attr "ma") (symbol_ref "INTVAL (operands[5])"))])
 
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023, 10:22
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
Why add vector_csr_operand?
Why not use vector_length_operand?
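
For context, the upstream definition under discussion reads (quoted from
gcc/config/riscv/predicates.md before this patch):

(define_special_predicate "vector_length_operand"
  (ior (match_operand 0 "pmode_register_operand")
       (match_operand 0 "const_csr_operand")))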
 
 
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:17
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
For vector_csr_operand, please refer to
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641124.html.
 
Joshua
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023, 10:14
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
No, we should handle this carefully step by step.
 
 
First, after the first kind of theadvector is merged, we can talk about the second kind of theadvector later.
 
 
I am confused by this patch; for example:
 
 
(define_predicate "vector_csr_operand"-  (ior (match_operand 0 "const_csr_operand")-       (match_operand 0 "register_operand")))+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")+      (match_operand 0 "const_csr_operand"))+    (match_operand 0 "register_operand")))
 
 
I just checked the upstream code; we don't have vector_csr_operand.
 
 
So, to make it easier for me to review and trace the code, please send the patches better organized.
 
 
Thanks.
juzhe.zhong@rivai.ai
 
 
From: joshua
Sent: 2023-12-29 10:09
To: juzhe.zhong@rivai.ai; gcc-patches
Cc: Jim Wilson; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; jinma; cooper.qu
Subject: Re:[PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
Hi Juzhe,
 
This patch "RISC-V: Handle differences between XTheadvector and
Vector" is addressing some code generation issues for RVV1.0
instructions that xtheadvector does not have, not with intrinsics.
 
BTW, what about the following patch, "RISC-V: Add support for
xtheadvector-specific intrinsics"? It adds support for new xtheadvector
instructions. Is it OK to be merged?
 
Joshua
 
 
 
 
 
 
------------------------------------------------------------------
From: juzhe.zhong@rivai.ai <juzhe.zhong@rivai.ai>
Sent: Friday, December 29, 2023, 09:58
To: "cooper.joshua"<cooper.joshua@linux.alibaba.com>; "gcc-patches"<gcc-patches@gcc.gnu.org>
Cc: Jim Wilson<jim.wilson.gcc@gmail.com>; palmer<palmer@dabbelt.com>; andrew<andrew@sifive.com>; "philipp.tomsich"<philipp.tomsich@vrull.eu>; jeffreyalaw<jeffreyalaw@gmail.com>; "christoph.muellner"<christoph.muellner@vrull.eu>; "cooper.joshua"<cooper.joshua@linux.alibaba.com>; jinma<jinma@linux.alibaba.com>; "cooper.qu"<cooper.qu@linux.alibaba.com>
Subject: Re: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
 
I am confused by this patch series.
 
 
I thought this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641417.html 
is enough to support the partial theadvector that can directly leverage RVV1.0?
 
 
Could you clean up and resend the patches based on the patch above (assuming it is already merged)?
 
 
juzhe.zhong@rivai.ai
 
 
From: Jun Sha (Joshua)
Date: 2023-12-29 09:46
To: gcc-patches
CC: jim.wilson.gcc; palmer; andrew; philipp.tomsich; jeffreyalaw; christoph.muellner; juzhe.zhong; Jun Sha (Joshua); Jin Ma; Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and Vector
 
This patch handles the differences in instruction generation
between Vector and XTheadVector. In this version, we only support
the subset of xtheadvector instructions that can leverage current
RVV1.0 directly by simply adding the "th." prefix. For xtheadvector
instructions that have different names but share the same patterns
as RVV1.0 instructions, we will use the ASM target hook to rewrite
the whole instruction string in the following patches.
 
For vector patterns that cannot be avoided, we guard them with
"!TARGET_XTHEADVECTOR" in vector.md so that we do not generate
instructions that xtheadvector does not support, such as vmv1r
and vsext.vf2.
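
As a concrete illustration, the guard is just a condition change on the
existing patterns (one representative line pair from the vector.md hunk
below):

-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"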
 
gcc/ChangeLog:
 
* config.gcc:  Add files for XTheadVector intrinsics.
* config/riscv/autovec.md: Guard XTheadVector.
* config/riscv/riscv-string.cc (expand_block_move):
Guard XTheadVector.
* config/riscv/riscv-v.cc (legitimize_move):
New expansion.
(get_prefer_tail_policy): Give specific value for tail.
(get_prefer_mask_policy): Give specific value for mask.
(vls_mode_valid_p): Avoid autovec.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
(build_one): New function.
* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
(DEF_THEAD_RVV_FUNCTION): Add new marcos.
(check_required_extensions):
(handle_pragma_vector):
* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
(RVV_REQUIRE_XTHEADVECTOR):
Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
(struct function_group_info):
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv-vsetvl.cc: Add functions for xtheadvector.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
Guard XTheadVector.
(riscv_v_adjust_bytesize): Likewise.
(riscv_preferred_simd_mode): Likewise.
(riscv_autovectorize_vector_modes): Likewise.
(riscv_vector_mode_supported_any_target_p): Likewise.
(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
* config/riscv/vector-iterators.md: Remove fractional LMUL.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/riscv_th_vector.h: New file.
* config/riscv/thead-vector.md: New file.
 
gcc/testsuite/ChangeLog:
 
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
* lib/target-supports.exp: Add target for XTheadVector.
 
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
gcc/config.gcc                                |   2 +-
gcc/config/riscv/autovec.md                   |   2 +-
gcc/config/riscv/predicates.md                |   8 +-
gcc/config/riscv/riscv-string.cc              |   3 +
gcc/config/riscv/riscv-v.cc                   |  13 +-
.../riscv/riscv-vector-builtins-bases.cc      |   3 +
.../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
gcc/config/riscv/riscv-vsetvl.cc              |  10 +
gcc/config/riscv/riscv.cc                     |  20 +-
gcc/config/riscv/riscv_th_vector.h            |  49 +++++
gcc/config/riscv/thead-vector.md              | 142 +++++++++++++
gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
gcc/config/riscv/vector.md                    |  36 +++-
.../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
.../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
gcc/testsuite/lib/target-supports.exp         |  12 ++
17 files changed, 474 insertions(+), 189 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector.md
 
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
extra_objs="${extra_objs} thead.o riscv-target-attr.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand      0 "register_operand")
    (match_operand      1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   {
     riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
   operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
(define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+      (match_operand 0 "const_csr_operand"))
+    (match_operand 0 "register_operand")))
;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
;; Predicates for the V extension.
(define_special_predicate "vector_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
-       (match_operand 0 "const_csr_operand")))
+      (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+    (match_operand 0 "const_csr_operand"))))
(define_special_predicate "autovec_length_operand"
   (ior (match_operand 0 "pmode_register_operand")
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 11c1f74d0b3..ec8f3486fd8 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in)
bnez a2, loop                   # Any more?
ret                             # Return
   */
+   if (TARGET_XTHEADVECTOR)
+    return false;
+
   gcc_assert (TARGET_VECTOR);
   HOST_WIDE_INT potential_ew
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 038ab084a37..5e9e45aecd2 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp)
       return true;
     }
+  if (TARGET_XTHEADVECTOR)
+      {
+ emit_insn (gen_pred_th_whole_mov (mode, dest, src,
+   RVV_VLMAX, GEN_INT(VLMAX)));
+ return true;
+      }
+
   if (riscv_v_ext_vls_mode_p (mode))
     {
       if (GET_MODE_NUNITS (mode).to_constant () <= 31)
@@ -1772,7 +1779,7 @@ get_prefer_tail_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return TAIL_ANY;
+  return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY;
}
/* Get prefer mask policy.  */
@@ -1783,7 +1790,7 @@ get_prefer_mask_policy ()
      compiler pick up either agnostic or undisturbed. Maybe we
      will have a compile option like -mprefer=agnostic to set
      this value???.  */
-  return MASK_ANY;
+  return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY;
}
/* Get avl_type rtx.  */
@@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode)
bool
vls_mode_valid_p (machine_mode vls_mode)
{
-  if (!TARGET_VECTOR)
+  if (!TARGET_VECTOR || TARGET_XTHEADVECTOR)
     return false;
   if (riscv_autovec_preference == RVV_SCALABLE)
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index c51affde353..2918c07ebf3 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -133,6 +133,9 @@ public:
       = get_vector_mode (QImode, GET_MODE_NUNITS (mode)).require ();
     e.add_input_operand (Pmode, gen_int_mode (get_vlmul (e8_mode), Pmode));
+    if (TARGET_XTHEADVECTOR)
+      return e.generate_insn (code_for_th_vsetvl_no_side_effects (Pmode));
+
     /* TAIL_ANY.  */
     e.add_input_operand (Pmode,
gen_int_mode (get_prefer_tail_policy (), Pmode));
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..6b49404a1fa 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,25 @@
namespace riscv_vector {
+/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are
+   valid for the function.  */
+
+static bool
+check_type (tree return_type, vec<tree> &argument_types)
+{
+  tree arg;
+  unsigned i;
+
+  if (!return_type)
+    return false;
+
+  FOR_EACH_VEC_ELT (argument_types, i, arg)
+    if (!arg)
+      return false;
+
+  return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
    mode suffix at index PAIR && bi and predication suffix at index pred_idx.  */
static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
#endif
/* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
ENTRY (RVVMF8BI, true, LMUL_1, 8)
ENTRY (RVVMF4BI, true, LMUL_2, 4)
ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32.  */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64.  */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
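
The entries above are pure data: each consumer defines ENTRY and
TUPLE_ENTRY itself before including the file, so the new
!TARGET_XTHEADVECTOR guards take effect wherever the REQUIREMENT
argument is evaluated.  A minimal sketch of that consumption pattern
(hypothetical consumer, GCC-internal types assumed in scope):

    /* Hypothetical: answer "is this vector mode enabled right now?"  */
    #define ENTRY(MODE, REQUIREMENT, VLMUL, RATIO) \
      case E_##MODE##mode:                         \
        return (REQUIREMENT);
    #define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART, NF, VLMUL, RATIO) \
      case E_##MODE##mode:                                            \
        return (REQUIREMENT);

    static bool
    mode_enabled_p (machine_mode mode)
    {
      switch (mode)
        {
    #include "riscv-vector-switch.def"
        default:
          return false;
        }
    }
    #undef ENTRY
    #undef TUPLE_ENTRY
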
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..c726253c107 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+        if (change_vtype_only_p ())
+          return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+        else if (has_vl () && !ignore_vl)
+          return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+        else
+          return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
+
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
{
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+        return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
static machine_mode
riscv_preferred_simd_mode (scalar_mode mode)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
unsigned int
riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
{
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
}
+/* Implements target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
/* Initialize the GCC target structure.  */
#undef TARGET_ASM_ALIGNED_HI_OP
#define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
#undef TARGET_PREFERRED_ELSE_VALUE
#define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
struct gcc_target targetm = TARGET_INITIALIZER;
#include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <" target="_blank">" target="_blank">" target="_blank">" target="_blank">http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
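
A usage sketch for the new header (whether the plain RVV intrinsic
spellings carry over unchanged is an assumption here; later patches in
this series add the XTheadVector-specific entry points):

    #include <riscv_th_vector.h>

    /* Build with an -march string containing xtheadvector.  */
    size_t
    take_vl (size_t avl)
    {
      return __riscv_vsetvl_e32m1 (avl);  /* RVV spelling assumed */
    }
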
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..af77e2a8a9e
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,142 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+ (match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+       RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:V_VLS_VT
+   [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand"  "=vr,vr, m")
+ (unspec:VB
+   [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+    (match_operand 2 "vector_length_operand"   " rK, rK, rK")
+    (match_operand 3 "const_1_operand"         "  i, i, i")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)]
+ UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 2)
+     (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only"
+  [(set (reg:SI VTYPE_REGNUM)
+ (unspec:SI
+   [(match_operand 0 "const_int_operand" "i")
+    (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,zero,e%0,%m1"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))])
+
+;; vsetvl zero,rs1,vtype instruction.
+;; We need this pattern because we must avoid setting the X0 register
+;; in the vsetvl instruction pattern.
+(define_insn "@th_vsetvl_discard_result<mode>"
+  [(set (reg:SI VL_REGNUM)
+ (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK")
+     (match_operand 1 "const_int_operand" "i")
+     (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+ (unspec:SI [(match_dup 1)
+     (match_dup 2)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\tzero,%0,e%1,%m2"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+;; Emitted by the vsetvl/vsetvlmax intrinsics with no side effects.
+;; Since we have many optimization passes from "expand" to "reload_completed",
+;; such a pattern lets us benefit from these optimizations.
+(define_insn_and_split "@th_vsetvl<mode>_no_side_effects"
+  [(set (match_operand:P 0 "register_operand" "=r")
+ (unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+    (match_operand 2 "const_int_operand" "i")
+    (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "#"
+  "&& epilogue_completed"
+  [(parallel
+    [(set (match_dup 0)
+   (unspec:P [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VL_REGNUM)
+   (unspec:SI [(match_dup 1) (match_dup 2) (match_dup 3)] UNSPEC_VSETVL))
+     (set (reg:SI VTYPE_REGNUM)
+   (unspec:SI [(match_dup 2) (match_dup 3)] UNSPEC_VSETVL))])]
+  ""
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "SI")])
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 5f5f7b5b986..c0fc7a2441d 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -109,11 +109,11 @@
])
(define_mode_iterator VI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -128,11 +128,11 @@
;; allow the instruction and mode to be matched during combine et al.
(define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
(define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
])
(define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
])
(define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -311,59 +311,59 @@
])
(define_mode_iterator VEEWTRUNC2 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
   (RVVM4SI "TARGET_64BIT")
   (RVVM2SI "TARGET_64BIT")
   (RVVM1SI "TARGET_64BIT")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC4 [
-  RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM2HI "TARGET_64BIT")
   (RVVM1HI "TARGET_64BIT")
-  (RVVMF2HI "TARGET_64BIT")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
   (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
   (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEEWTRUNC8 [
   (RVVM1QI "TARGET_64BIT")
-  (RVVMF2QI "TARGET_64BIT")
-  (RVVMF4QI "TARGET_64BIT")
-  (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT")
])
(define_mode_iterator VEI16 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -452,11 +452,11 @@
])
(define_mode_iterator VFULLI [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V")
@@ -509,17 +509,17 @@
])
(define_mode_iterator VI_QH [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -560,11 +560,11 @@
])
(define_mode_iterator VI_QHS_NO_M8 [
-  RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -603,11 +603,11 @@
(define_mode_iterator VF_HS [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -638,12 +638,12 @@
   (RVVM4HF "TARGET_ZVFH")
   (RVVM2HF "TARGET_ZVFH")
   (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH")
   (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH")
@@ -674,11 +674,11 @@
])
(define_mode_iterator V_VLSI_QHS [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)")
   (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)")
@@ -756,27 +756,27 @@
;; E.g. when index mode = RVVM8QImode and Pmode = SImode, if it is not zero_extend or
;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode.
(define_mode_iterator RATIO64 [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO32 [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64")
])
(define_mode_iterator RATIO16 [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64")
@@ -814,21 +814,21 @@
])
(define_mode_iterator RATIO64I [
-  (RVVMF8QI "TARGET_MIN_VLEN > 32")
-  (RVVMF4HI "TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO32I [
-  RVVMF4QI
-  RVVMF2HI
+  (RVVMF4QI "!TARGET_XTHEADVECTOR")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR")
   RVVM1SI
   (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
])
(define_mode_iterator RATIO16I [
-  RVVMF2QI
+  (RVVMF2QI "!TARGET_XTHEADVECTOR")
   RVVM1HI
   RVVM2SI
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -873,21 +873,21 @@
])
(define_mode_iterator V_FRACT [
-  RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
-  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
])
(define_mode_iterator VWEXTI [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -933,7 +933,7 @@
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -966,7 +966,7 @@
   (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -996,7 +996,7 @@
(define_mode_iterator VWCONVERTI [
   (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH")
-  (RVVMF2SI "TARGET_ZVFH")
+  (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32")
@@ -1045,7 +1045,7 @@
])
(define_mode_iterator VQEXTI [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -1456,11 +1456,11 @@
;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")].
(define_mode_iterator VINDEXED [
-  RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32")
+  RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
   (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
@@ -1468,12 +1468,12 @@
   (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT")
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT")
@@ -3173,11 +3173,11 @@
(define_mode_iterator V_VLS_F_CONVERT_SI [
   (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH")
-  (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
@@ -3290,12 +3290,12 @@
])
(define_mode_iterator V_VLS_F_CONVERT_DI [
-  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
   (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
   (RVVM1SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 036b2425f32..9941651341d 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -83,7 +83,7 @@
;; check. However, we need default value of SEW for vsetvl instruction since there
;; is no field for ratio in the vsetvl instruction encoding.
(define_attr "sew" ""
-  (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
+  (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\
  RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\
  RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\
  RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\
@@ -95,6 +95,18 @@
  V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\
  V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI")
(const_int 8)
+ (eq_attr "mode" "RVVMF16BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 16)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF32BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 32)
+      (const_int 8))
+ (eq_attr "mode" "RVVMF64BI")
+    (if_then_else (match_test "TARGET_XTHEADVECTOR")
+      (const_int 64)
+      (const_int 8))
(eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\
  RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\
  RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\
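
A worked instance of the new if_then_else arms: a mask mode's SEW is
recovered from the SEW/LMUL ratio baked into the mode.  RVVMF32BI has
ratio 32; plain RVV reaches it as SEW=8 with LMUL=1/4, since
8 / (1/4) = 32, while XTheadVector has no fractional LMUL and expresses
the same ratio as SEW=32 with LMUL=1, since 32 / 1 = 32.  That is the
pair the attributes now return under TARGET_XTHEADVECTOR: 32 here and
LMUL_1 from the lmul attribute below, against 8 and LMUL_F4 otherwise.
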
@@ -155,9 +167,9 @@
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2")
(eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1")
- (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2")
- (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4")
- (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8")
+ (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2")
+ (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4")
+ (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F8")
(eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8")
(eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4")
(eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2")
@@ -428,6 +440,10 @@
  vislide1up,vislide1down,vfslide1up,vfslide1down,\
  vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
   (const_int INVALID_ATTRIBUTE)
+ (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\
+        vlsegdff,vssegtux,vlsegdox,vlsegdux")
+       (match_test "TARGET_XTHEADVECTOR"))
+    (const_int INVALID_ATTRIBUTE)
(eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
(eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
(eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4)
@@ -888,6 +904,8 @@
(symbol_ref "riscv_vector::FRM_DYN")]
(symbol_ref "riscv_vector::FRM_NONE")))
+(include "thead-vector.md")
+
;; -----------------------------------------------------------------
;; ---- Miscellaneous Operations
;; -----------------------------------------------------------------
@@ -1097,7 +1115,7 @@
(define_insn "*mov<mode>_whole"
   [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_WHOLE 1 "reg_or_mem_operand" "  m,vr,vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "@
    vl%m1re<sew>.v\t%0,%1
    vs%m1r.v\t%1,%0
@@ -1125,7 +1143,7 @@
(define_insn "*mov<mode>"
   [(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "vmv1r.v\t%0,%1"
   [(set_attr "type" "vmov")
    (set_attr "mode" "<MODE>")])
@@ -3680,7 +3698,7 @@
  (any_extend:VWEXTI
    (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand"   "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,   vr,   vr"))
  (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf2\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3701,7 +3719,7 @@
  (any_extend:VQEXTI
    (match_operand:<V_QUAD_TRUNC> 3 "register_operand"   "W43,W43,W43,W43,W86,W86,W86,W86,   vr,   vr"))
  (match_operand:VQEXTI 2 "vector_merge_operand"         " vu, vu,  0,  0, vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf4\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
@@ -3722,7 +3740,7 @@
  (any_extend:VOEXTI
    (match_operand:<V_OCT_TRUNC> 3 "register_operand"   "W87,W87,W87,W87,   vr,   vr"))
  (match_operand:VOEXTI 2 "vector_merge_operand"        " vu, vu,  0,  0,   vu,    0")))]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
   "v<sz>ext.vf8\t%0,%3%p1"
   [(set_attr "type" "vext")
    (set_attr "mode" "<MODE>")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
index 2e0e12aa045..2eef9e1e1a8 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c
@@ -1,4 +1,4 @@
-/* { dg-do compile } */
+/* { dg-do compile { target { ! riscv_xtheadvector } } } */
/* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
index 7f13ff0ca56..70df6b1401c 100644
--- a/gcc/testsuite/lib/target-supports.exp
+++ b/gcc/testsuite/lib/target-supports.exp
@@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } {
     }]
}
+# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise.
+# Cache the result.
+
+proc check_effective_target_riscv_xtheadvector { } {
+    return [check_no_compiler_messages riscv_ext_xtheadvector assembly {
+       #ifndef __riscv_xtheadvector
+       #error "Not __riscv_xtheadvector"
+       #endif
+    }]
+}
+
+
# Return 1 if we can execute code when using dg-add-options riscv_v
proc check_effective_target_riscv_v_ok { } {
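
With the new effective-target keyword, tests can be gated on
XTheadVector just as abi-1.c above is gated against it.  A hypothetical
test skeleton:

    /* { dg-do compile { target { riscv_xtheadvector } } } */
    /* { dg-options "-O2" } */

    #include <riscv_th_vector.h>

    void
    f (void)
    {
      /* body exercising XTheadVector-only behavior */
    }
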
-- 
2.17.1
^ permalink raw reply	[flat|nested] 69+ messages in thread

end of thread, other threads:[~2023-12-29  2:47 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-18  4:22 [PATCH v2 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
2023-11-18  4:26 ` [PATCH v2 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
2023-11-18 10:06   ` Kito Cheng
2023-11-18  4:28 ` [PATCH v2 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
2023-11-18 10:13   ` Kito Cheng
2023-11-18  4:29 ` [PATCH v2 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
2023-11-18  4:32 ` [PATCH v2 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
2023-11-18  4:34 ` [PATCH v2 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
2023-11-18  4:35 ` [PATCH v2 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
2023-11-18  4:37 ` [PATCH v2 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
2023-11-18  4:39 ` [PATCH v2 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
2023-12-20 12:20 ` [PATCH v3 0/6] RISC-V: Support " Jun Sha (Joshua)
2023-12-20 12:25   ` [PATCH v3 1/6] RISC-V: Refactor riscv-vector-builtins-bases.cc Jun Sha (Joshua)
2023-12-20 18:14     ` Jeff Law
2023-12-27  2:46       ` Re: [PATCH " joshua
2023-12-29  1:44       ` joshua
2023-12-20 12:27   ` [PATCH v3 2/6] RISC-V: Split csr_operand in predicates.md for vector patterns Jun Sha (Joshua)
2023-12-20 18:16     ` Jeff Law
2023-12-27  2:49       ` Re: [PATCH " joshua
2023-12-28 15:50         ` Jeff Law
2023-12-20 12:30   ` [PATCH v3 3/6] RISC-V: Introduce XTheadVector as a subset of V1.0.0 Jun Sha (Joshua)
2023-12-20 12:32   ` [PATCH v3 4/6] RISC-V: Adds the prefix "th." for the instructions of XTheadVector Jun Sha (Joshua)
2023-12-20 18:22     ` Jeff Law
2023-12-20 22:48       ` 钟居哲
2023-12-21  4:41         ` Jeff Law
2023-12-21  9:43           ` Kito Cheng
2023-12-25  6:25     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-25  6:37       ` juzhe.zhong
2023-12-25  7:08         ` Re: [PATCH " joshua
2023-12-25  7:09           ` juzhe.zhong
2023-12-25  8:14       ` [PATCH " Jun Sha (Joshua)
2023-12-25  8:18         ` juzhe.zhong
2023-12-20 12:34   ` [PATCH v3 5/6] RISC-V: Handle differences between XTheadvector and Vector Jun Sha (Joshua)
2023-12-20 14:00     ` 钟居哲
2023-12-20 14:24       ` Re: [PATCH " joshua
2023-12-20 14:27         ` 钟居哲
2023-12-20 14:41           ` Re: Re: [PATCH " joshua
2023-12-20 14:48             ` Re: [PATCH " 钟居哲
2023-12-20 14:55             ` 钟居哲
2023-12-20 15:21               ` Re: Re: [PATCH " joshua
2023-12-20 15:29                 ` Re: [PATCH " 钟居哲
2023-12-25  6:29     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-29  1:46       ` Jun Sha (Joshua)
2023-12-29  1:58         ` juzhe.zhong
2023-12-29  2:09           ` Re: [PATCH " joshua
2023-12-29  2:11             ` Re: [PATCH " joshua
2023-12-29  2:14             ` Re: [PATCH " juzhe.zhong
2023-12-29  2:17               ` Re: [PATCH " joshua
2023-12-29  2:22                 ` juzhe.zhong
2023-12-29  2:25                   ` Re: Re: [PATCH " joshua
2023-12-29  2:25                     ` Re: [PATCH " juzhe.zhong
2023-12-29  2:30                       ` joshua
2023-12-29  2:31                         ` juzhe.zhong
2023-12-29  2:47                         ` juzhe.zhong
2023-12-20 12:36   ` [PATCH v3 6/6] RISC-V: Add support for xtheadvector-specific intrinsics Jun Sha (Joshua)
2023-12-25  6:31     ` [PATCH v4 " Jun Sha (Joshua)
2023-12-29  1:49       ` Jun Sha (Joshua)
2023-12-20 23:04   ` [PATCH v3 0/6] RISC-V: Support XTheadVector extension 钟居哲
2023-12-22  3:33     ` Re: [PATCH " joshua
2023-12-22  8:07       ` juzhe.zhong
2023-12-22 10:29         ` Re: Re: [PATCH " joshua
2023-12-22 10:31           ` Re: [PATCH " juzhe.zhong
2023-12-23  3:37             ` Re: Re: [PATCH " joshua
2023-12-23 22:52               ` Re: [PATCH " 钟居哲
2023-12-22 17:21         ` Jeff Law
2023-12-20 23:08   ` [PATCH " 钟居哲
2023-12-21  3:28     ` Jeff Law
2023-12-21  3:30       ` juzhe.zhong
2023-12-21  4:04         ` Jeff Law
