* [PATCH 0/9] RISC-V: Support XTheadVector extensions
@ 2023-11-17 8:19 Jun Sha (Joshua)
From: Jun Sha (Joshua) @ 2023-11-17 8:19 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
This patch series presents the GCC implementation of the XTheadVector
extension [1].
[1] https://github.com/T-head-Semi/thead-extension-spec/
Contributors:
Jun Sha (Joshua) <cooper.joshua@linux.alibaba.com>
Jin Ma <jinma@linux.alibaba.com>
RISC-V: minimal support for xtheadvector
RISC-V: Handle differences between xtheadvector and vector
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4)
RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5)
RISC-V: Add support for xtheadvector-specific load/store intrinsics
RISC-V: Disable fractional type intrinsics for XTheadVector
---
gcc/common/config/riscv/riscv-common.cc | 10 +
gcc/config.gcc | 2 +-
gcc/config/riscv/riscv-c.cc | 8 +-
gcc/config/riscv/riscv-protos.h | 1 +
.../riscv/riscv-vector-builtins-bases.cc | 122 +++
.../riscv/riscv-vector-builtins-bases.h | 30 +
.../riscv/riscv-vector-builtins-functions.def | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 122 +++
.../riscv/riscv-vector-builtins-shapes.h | 2 +
.../riscv/riscv-vector-builtins-types.def | 120 +++
gcc/config/riscv/riscv-vector-builtins.cc | 300 ++++++-
gcc/config/riscv/riscv-vector-switch.def | 144 ++--
gcc/config/riscv/riscv.cc | 13 +-
gcc/config/riscv/riscv.opt | 2 +
gcc/config/riscv/riscv_th_vector.h | 49 ++
.../riscv/thead-vector-builtins-functions.def | 30 +
gcc/config/riscv/thead-vector.md | 235 ++++++
gcc/config/riscv/vector-iterators.md | 4 +
gcc/config/riscv/vector.md | 778 +++++++++---------
.../riscv/predef-__riscv_th_v_intrinsic.c | 11 +
.../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
.../gcc.target/riscv/rvv/fractional-type.c | 79 ++
.../gcc.target/riscv/rvv/xtheadvector.c | 13 +
.../rvv/xtheadvector/autovec/vadd-run-nofm.c | 4 +
.../riscv/rvv/xtheadvector/autovec/vadd-run.c | 81 ++
.../xtheadvector/autovec/vadd-rv32gcv-nofm.c | 10 +
.../rvv/xtheadvector/autovec/vadd-rv32gcv.c | 8 +
.../xtheadvector/autovec/vadd-rv64gcv-nofm.c | 10 +
.../rvv/xtheadvector/autovec/vadd-rv64gcv.c | 8 +
.../rvv/xtheadvector/autovec/vadd-template.h | 70 ++
.../rvv/xtheadvector/autovec/vadd-zvfh-run.c | 54 ++
.../riscv/rvv/xtheadvector/autovec/vand-run.c | 75 ++
.../rvv/xtheadvector/autovec/vand-rv32gcv.c | 7 +
.../rvv/xtheadvector/autovec/vand-rv64gcv.c | 7 +
.../rvv/xtheadvector/autovec/vand-template.h | 61 ++
.../rvv/xtheadvector/binop_vv_constraint-1.c | 68 ++
.../rvv/xtheadvector/binop_vv_constraint-3.c | 27 +
.../rvv/xtheadvector/binop_vv_constraint-4.c | 27 +
.../rvv/xtheadvector/binop_vv_constraint-5.c | 29 +
.../rvv/xtheadvector/binop_vv_constraint-6.c | 28 +
.../rvv/xtheadvector/binop_vv_constraint-7.c | 29 +
.../rvv/xtheadvector/binop_vx_constraint-1.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-10.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-11.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-12.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-13.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-14.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-15.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-16.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-17.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-18.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-19.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-2.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-20.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-21.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-22.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-23.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-24.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-25.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-26.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-27.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-28.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-29.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-3.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-30.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-31.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-32.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-33.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-34.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-35.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-36.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-37.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-38.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-39.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-4.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-40.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-41.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-42.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-43.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-44.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-45.c | 123 +++
.../rvv/xtheadvector/binop_vx_constraint-46.c | 72 ++
.../rvv/xtheadvector/binop_vx_constraint-47.c | 16 +
.../rvv/xtheadvector/binop_vx_constraint-48.c | 16 +
.../rvv/xtheadvector/binop_vx_constraint-49.c | 16 +
.../rvv/xtheadvector/binop_vx_constraint-5.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-50.c | 18 +
.../rvv/xtheadvector/binop_vx_constraint-6.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-7.c | 68 ++
.../rvv/xtheadvector/binop_vx_constraint-8.c | 73 ++
.../rvv/xtheadvector/binop_vx_constraint-9.c | 68 ++
.../rvv/xtheadvector/rvv-xtheadvector.exp | 41 +
.../rvv/xtheadvector/ternop_vv_constraint-1.c | 83 ++
.../rvv/xtheadvector/ternop_vv_constraint-2.c | 83 ++
.../rvv/xtheadvector/ternop_vv_constraint-3.c | 83 ++
.../rvv/xtheadvector/ternop_vv_constraint-4.c | 83 ++
.../rvv/xtheadvector/ternop_vv_constraint-5.c | 83 ++
.../rvv/xtheadvector/ternop_vv_constraint-6.c | 83 ++
.../rvv/xtheadvector/ternop_vx_constraint-1.c | 71 ++
.../rvv/xtheadvector/ternop_vx_constraint-2.c | 38 +
.../rvv/xtheadvector/ternop_vx_constraint-3.c | 125 +++
.../rvv/xtheadvector/ternop_vx_constraint-4.c | 123 +++
.../rvv/xtheadvector/ternop_vx_constraint-5.c | 123 +++
.../rvv/xtheadvector/ternop_vx_constraint-6.c | 130 +++
.../rvv/xtheadvector/ternop_vx_constraint-7.c | 130 +++
.../rvv/xtheadvector/ternop_vx_constraint-8.c | 71 ++
.../rvv/xtheadvector/ternop_vx_constraint-9.c | 71 ++
.../rvv/xtheadvector/unop_v_constraint-1.c | 68 ++
.../riscv/rvv/xtheadvector/vlb-vsb.c | 68 ++
.../riscv/rvv/xtheadvector/vlbu-vsb.c | 68 ++
.../riscv/rvv/xtheadvector/vlh-vsh.c | 68 ++
.../riscv/rvv/xtheadvector/vlhu-vsh.c | 68 ++
.../riscv/rvv/xtheadvector/vlw-vsw.c | 68 ++
.../riscv/rvv/xtheadvector/vlwu-vsw.c | 68 ++
114 files changed, 7455 insertions(+), 457 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
* [PATCH 1/9] RISC-V: minimal support for xtheadvector
From: Jun Sha (Joshua) @ 2023-11-17 8:52 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
This patch introduces basic XTheadVector support
(-march string parsing and a test for __riscv_xtheadvector),
following https://github.com/T-head-Semi/thead-extension-spec/
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc
(riscv_subset_list::parse): Add new vendor extension.
* config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):
Add test macro.
* config/riscv/riscv.opt: Add new mask.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/predef-__riscv_th_v_intrinsic.c: New test.
* gcc.target/riscv/rvv/xtheadvector.c: New test.
---
gcc/common/config/riscv/riscv-common.cc | 10 ++++++++++
gcc/config/riscv/riscv-c.cc | 4 ++++
gcc/config/riscv/riscv.opt | 2 ++
.../riscv/predef-__riscv_th_v_intrinsic.c | 11 +++++++++++
gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c | 13 +++++++++++++
5 files changed, 40 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
diff --git a/gcc/common/config/riscv/riscv-common.cc b/gcc/common/config/riscv/riscv-common.cc
index 526dbb7603b..914924171fd 100644
--- a/gcc/common/config/riscv/riscv-common.cc
+++ b/gcc/common/config/riscv/riscv-common.cc
@@ -75,6 +75,8 @@ static const riscv_implied_info_t riscv_implied_info[] =
{"v", "zvl128b"},
{"v", "zve64d"},
+ {"xtheadvector", "zvl128b"},
+ {"xtheadvector", "zve64d"},
{"zve32f", "f"},
{"zve64f", "f"},
@@ -325,6 +327,7 @@ static const struct riscv_ext_version riscv_ext_version_table[] =
{"xtheadmemidx", ISA_SPEC_CLASS_NONE, 1, 0},
{"xtheadmempair", ISA_SPEC_CLASS_NONE, 1, 0},
{"xtheadsync", ISA_SPEC_CLASS_NONE, 1, 0},
+ {"xtheadvector", ISA_SPEC_CLASS_NONE, 1, 0},
{"xventanacondops", ISA_SPEC_CLASS_NONE, 1, 0},
@@ -1495,6 +1498,10 @@ riscv_subset_list::parse (const char *arch, location_t loc)
error_at (loc, "%<-march=%s%>: z*inx conflicts with floating-point "
"extensions", arch);
+ if (subset_list->lookup ("v") && subset_list->lookup ("xtheadvector"))
+ error_at (loc, "%<-march=%s%>: xtheadvector conflicts with vector "
+ "extensions", arch);
+
/* 'H' hypervisor extension requires base ISA with 32 registers. */
if (subset_list->lookup ("e") && subset_list->lookup ("h"))
error_at (loc, "%<-march=%s%>: h extension requires i extension", arch);
@@ -1680,6 +1687,9 @@ static const riscv_ext_flag_table_t riscv_ext_flag_table[] =
{"xtheadmemidx", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMIDX},
{"xtheadmempair", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADMEMPAIR},
{"xtheadsync", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADSYNC},
+ {"xtheadvector", &gcc_options::x_riscv_xthead_subext, MASK_XTHEADVECTOR},
+ {"xtheadvector", &gcc_options::x_target_flags, MASK_FULL_V},
+ {"xtheadvector", &gcc_options::x_target_flags, MASK_VECTOR},
{"xventanacondops", &gcc_options::x_riscv_xventana_subext, MASK_XVENTANACONDOPS},
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index b7f9ba204f7..184fff905b2 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -137,6 +137,10 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
riscv_ext_version_value (0, 11));
}
+ if (TARGET_XTHEADVECTOR)
+ builtin_define_with_int_value ("__riscv_th_v_intrinsic",
+ riscv_ext_version_value (0, 11));
+
/* Define architecture extension test macros. */
builtin_define_with_int_value ("__riscv_arch_test", 1);
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index 70d78151cee..72857aea352 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -438,6 +438,8 @@ Mask(XTHEADMEMPAIR) Var(riscv_xthead_subext)
Mask(XTHEADSYNC) Var(riscv_xthead_subext)
+Mask(XTHEADVECTOR) Var(riscv_xthead_subext)
+
TargetVariable
int riscv_xventana_subext
diff --git a/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
new file mode 100644
index 00000000000..1c764241db6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/predef-__riscv_th_v_intrinsic.c
@@ -0,0 +1,11 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64imafdcxtheadvector -mabi=lp64d" } */
+
+int main () {
+
+#if __riscv_th_v_intrinsic != 11000
+#error "__riscv_th_v_intrinsic"
+#endif
+
+ return 0;
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
new file mode 100644
index 00000000000..d52921e1314
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector.c
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector" { target { rv32 } } } */
+/* { dg-options "-march=rv64gc_xtheadvector" { target { rv64 } } } */
+
+#ifndef __riscv_xtheadvector
+#error "Feature macro not defined"
+#endif
+
+int
+foo (int a)
+{
+ return a;
+}
\ No newline at end of file
--
2.17.1
* [PATCH 2/9] RISC-V: Handle differences between xtheadvector and vector
From: Jun Sha (Joshua) @ 2023-11-17 8:55 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
This patch handles the differences in instruction generation
between vector and xtheadvector, mainly adding the "th." prefix
to all xtheadvector instructions.
gcc/ChangeLog:
* config.gcc: Add header for XTheadVector intrinsics.
* config/riscv/riscv-c.cc (riscv_pragma_intrinsic):
Add XTheadVector.
* config/riscv/riscv.cc (riscv_print_operand):
Add new operand format directives.
(riscv_print_operand_punct_valid_p): Likewise.
* config/riscv/vector-iterators.md: Split any_int_unop
for not and neg.
* config/riscv/vector.md (@pred_<optab><mode>):
Add th. for xtheadvector instructions.
* config/riscv/riscv_th_vector.h: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
---
gcc/config.gcc | 2 +-
gcc/config/riscv/riscv-c.cc | 4 +-
gcc/config/riscv/riscv.cc | 11 +-
gcc/config/riscv/riscv_th_vector.h | 49 ++
gcc/config/riscv/vector-iterators.md | 4 +
gcc/config/riscv/vector.md | 777 +++++++++---------
.../gcc.target/riscv/rvv/base/pragma-1.c | 2 +-
7 files changed, 466 insertions(+), 383 deletions(-)
create mode 100644 gcc/config/riscv/riscv_th_vector.h
diff --git a/gcc/config.gcc b/gcc/config.gcc
index ba6d63e33ac..e0fc2b1a27c 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -548,7 +548,7 @@ riscv*)
extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
extra_objs="${extra_objs} thead.o"
d_target_objs="riscv-d.o"
- extra_headers="riscv_vector.h"
+ extra_headers="riscv_vector.h riscv_th_vector.h"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
;;
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index 184fff905b2..0a17d5f6656 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -194,8 +194,8 @@ riscv_pragma_intrinsic (cpp_reader *)
{
if (!TARGET_VECTOR)
{
- error ("%<#pragma riscv intrinsic%> option %qs needs 'V' extension "
- "enabled",
+ error ("%<#pragma riscv intrinsic%> option %qs needs 'V' or "
+ "'XTHEADVECTOR' extension enabled",
name);
return;
}
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index ecee7eb4727..754107cdaac 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5323,7 +5323,7 @@ riscv_get_v_regno_alignment (machine_mode mode)
static void
riscv_print_operand (FILE *file, rtx op, int letter)
{
- /* `~` does not take an operand so op will be null
+ /* `~` and `^` do not take an operand so op will be null
Check for before accessing op.
*/
if (letter == '~')
@@ -5332,6 +5332,13 @@ riscv_print_operand (FILE *file, rtx op, int letter)
fputc('w', file);
return;
}
+
+ if (letter == '^')
+ {
+ if (TARGET_XTHEADVECTOR)
+ fputs ("th.", file);
+ return;
+ }
machine_mode mode = GET_MODE (op);
enum rtx_code code = GET_CODE (op);
@@ -5584,7 +5591,7 @@ riscv_print_operand (FILE *file, rtx op, int letter)
static bool
riscv_print_operand_punct_valid_p (unsigned char code)
{
- return (code == '~');
+ return (code == '~' || code == '^');
}
/* Implement TARGET_PRINT_OPERAND_ADDRESS. */
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..194652032bc
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+ Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+ This file is part of GCC.
+
+ GCC is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published
+ by the Free Software Foundation; either version 3, or (at your
+ option) any later version.
+
+ GCC is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
+ License for more details.
+
+ Under Section 7 of GPL version 3, you are granted additional
+ permissions described in the GCC Runtime Library Exception, version
+ 3.1, as published by the Free Software Foundation.
+
+ You should have received a copy of the GNU General Public License and
+ a copy of the GCC Runtime Library Exception along with this program;
+ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It
+ does not define the RVV types and intrinsic functions directly in C and
+ C++ code, but instead uses the following pragma to tell GCC to insert the
+ necessary type and function definitions itself.  The net effect is the
+ same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index f04c7fe5491..4b1ba84750c 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3679,6 +3679,10 @@ (define_code_iterator any_int_binop [plus minus and ior xor ashift ashiftrt lshi
(define_code_iterator any_int_unop [neg not])
+(define_code_iterator neg_unop [neg])
+
+(define_code_iterator not_unop [not])
+
(define_code_iterator any_commutative_binop [plus and ior xor
smax umax smin umin mult
])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index d1499d330ff..2af237854f9 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -1099,9 +1099,9 @@ (define_insn "*mov<mode>_whole"
(match_operand:V_WHOLE 1 "reg_or_mem_operand" " m,vr,vr"))]
"TARGET_VECTOR"
"@
- vl%m1re<sew>.v\t%0,%1
- vs%m1r.v\t%1,%0
- vmv%m1r.v\t%0,%1"
+ * return TARGET_XTHEADVECTOR ? \"th.vl%m1re.v\t%0,%1\" : \"vl%m1re<sew>.v\t%0,%1\";
+ %^vs%m1r.v\t%1,%0
+ %^vmv%m1r.v\t%0,%1"
[(set_attr "type" "vldr,vstr,vmov")
(set_attr "mode" "<MODE>")])
@@ -1109,7 +1109,7 @@ (define_insn "*mov<mode>_fract"
[(set (match_operand:V_FRACT 0 "register_operand" "=vr")
(match_operand:V_FRACT 1 "register_operand" " vr"))]
"TARGET_VECTOR"
- "vmv1r.v\t%0,%1"
+ "%^vmv1r.v\t%0,%1"
[(set_attr "type" "vmov")
(set_attr "mode" "<MODE>")])
@@ -1126,7 +1126,7 @@ (define_insn "*mov<mode>"
[(set (match_operand:VB 0 "register_operand" "=vr")
(match_operand:VB 1 "register_operand" " vr"))]
"TARGET_VECTOR"
- "vmv1r.v\t%0,%1"
+ "%^vmv1r.v\t%0,%1"
[(set_attr "type" "vmov")
(set_attr "mode" "<MODE>")])
@@ -1135,7 +1135,7 @@ (define_expand "@mov<V_FRACT:mode><P:mode>_lra"
[(set (match_operand:V_FRACT 0 "reg_or_mem_operand")
(match_operand:V_FRACT 1 "reg_or_mem_operand"))
(clobber (match_scratch:P 2))])]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)"
{})
(define_expand "@mov<VB:mode><P:mode>_lra"
@@ -1143,14 +1143,14 @@ (define_expand "@mov<VB:mode><P:mode>_lra"
[(set (match_operand:VB 0 "reg_or_mem_operand")
(match_operand:VB 1 "reg_or_mem_operand"))
(clobber (match_scratch:P 2))])]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)"
{})
(define_insn_and_split "*mov<V_FRACT:mode><P:mode>_lra"
[(set (match_operand:V_FRACT 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:V_FRACT 1 "reg_or_mem_operand" " m,vr,vr"))
(clobber (match_scratch:P 2 "=&r,&r,X"))]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)"
"#"
"&& reload_completed"
[(const_int 0)]
@@ -1172,7 +1172,7 @@ (define_insn_and_split "*mov<VB:mode><P:mode>_lra"
[(set (match_operand:VB 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:VB 1 "reg_or_mem_operand" " m,vr,vr"))
(clobber (match_scratch:P 2 "=&r,&r,X"))]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)"
"#"
"&& reload_completed"
[(const_int 0)]
@@ -1258,7 +1258,7 @@ (define_insn_and_split "*mov<mode>"
"@
#
#
- vmv%m1r.v\t%0,%1"
+ %^vmv%m1r.v\t%0,%1"
"&& reload_completed
&& (!register_operand (operands[0], <MODE>mode)
|| !register_operand (operands[1], <MODE>mode))"
@@ -1286,14 +1286,14 @@ (define_expand "@mov<VLS_AVL_REG:mode><P:mode>_lra"
[(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand")
(match_operand:VLS_AVL_REG 1 "reg_or_mem_operand"))
(clobber (match_scratch:P 2))])]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)"
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)"
{})
(define_insn_and_split "*mov<VLS_AVL_REG:mode><P:mode>_lra"
[(set (match_operand:VLS_AVL_REG 0 "reg_or_mem_operand" "=vr, m,vr")
(match_operand:VLS_AVL_REG 1 "reg_or_mem_operand" " m,vr,vr"))
(clobber (match_scratch:P 2 "=&r,&r,X"))]
- "TARGET_VECTOR && (lra_in_progress || reload_completed)
+ "TARGET_VECTOR && (lra_in_progress || reload_completed)
&& (register_operand (operands[0], <VLS_AVL_REG:MODE>mode)
|| register_operand (operands[1], <VLS_AVL_REG:MODE>mode))"
"#"
@@ -1322,7 +1322,7 @@ (define_insn "*mov<mode>_vls"
[(set (match_operand:VLS 0 "register_operand" "=vr")
(match_operand:VLS 1 "register_operand" " vr"))]
"TARGET_VECTOR"
- "vmv%m1r.v\t%0,%1"
+ "%^vmv%m1r.v\t%0,%1"
[(set_attr "type" "vmov")
(set_attr "mode" "<MODE>")])
@@ -1330,7 +1330,7 @@ (define_insn "*mov<mode>_vls"
[(set (match_operand:VLSB 0 "register_operand" "=vr")
(match_operand:VLSB 1 "register_operand" " vr"))]
"TARGET_VECTOR"
- "vmv1r.v\t%0,%1"
+ "%^vmv1r.v\t%0,%1"
[(set_attr "type" "vmov")
(set_attr "mode" "<MODE>")])
@@ -1359,7 +1359,7 @@ (define_expand "movmisalign<mode>"
(define_expand "movmisalign<mode>"
[(set (match_operand:V 0 "nonimmediate_operand")
(match_operand:V 1 "general_operand"))]
- "TARGET_VECTOR && TARGET_VECTOR_MISALIGN_SUPPORTED"
+ "TARGET_VECTOR && TARGET_VECTOR_MISALIGN_SUPPORTED"
{
emit_move_insn (operands[0], operands[1]);
DONE;
@@ -1396,7 +1396,7 @@ (define_insn_and_split "*vec_duplicate<mode>"
[(set (match_operand:V_VLS 0 "register_operand")
(vec_duplicate:V_VLS
(match_operand:<VEL> 1 "direct_broadcast_operand")))]
- "TARGET_VECTOR && can_create_pseudo_p ()"
+ "TARGET_VECTOR && can_create_pseudo_p ()"
"#"
"&& 1"
[(const_int 0)]
@@ -1530,7 +1530,7 @@ (define_insn "@vsetvl<mode>"
(match_dup 4)
(match_dup 5)] UNSPEC_VSETVL))]
"TARGET_VECTOR"
- "vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
+ "%^vset%i1vli\t%0,%1,e%2,%m3,t%p4,m%p5"
[(set_attr "type" "vsetvl")
(set_attr "mode" "<MODE>")
(set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
@@ -1548,7 +1548,7 @@ (define_insn "vsetvl_vtype_change_only"
(match_operand 2 "const_int_operand" "i")
(match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))]
"TARGET_VECTOR"
- "vsetvli\tzero,zero,e%0,%m1,t%p2,m%p3"
+ "%^vsetvli\tzero,zero,e%0,%m1,t%p2,m%p3"
[(set_attr "type" "vsetvl")
(set_attr "mode" "SI")
(set (attr "sew") (symbol_ref "INTVAL (operands[0])"))
@@ -1570,7 +1570,7 @@ (define_insn "@vsetvl_discard_result<mode>"
(match_operand 3 "const_int_operand" "i")
(match_operand 4 "const_int_operand" "i")] UNSPEC_VSETVL))]
"TARGET_VECTOR"
- "vset%i0vli\tzero,%0,e%1,%m2,t%p3,m%p4"
+ "%^vset%i0vli\tzero,%0,e%1,%m2,t%p3,m%p4"
[(set_attr "type" "vsetvl")
(set_attr "mode" "<MODE>")
(set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
@@ -1720,12 +1720,12 @@ (define_insn_and_split "*pred_mov<mode>"
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[3], <MODE>mode)))"
"@
- vle<sew>.v\t%0,%3%p1
- vle<sew>.v\t%0,%3
- vle<sew>.v\t%0,%3,%1.t
- vse<sew>.v\t%3,%0%p1
- vmv.v.v\t%0,%3
- vmv.v.v\t%0,%3"
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3%p1\" : \"vle<sew>.v\t%0,%3%p1\";
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3\" : \"vle<sew>.v\t%0,%3\";
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3,%1.t\" : \"vle<sew>.v\t%0,%3,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vse.v\t%3,%0%p1\" : \"vse<sew>.v\t%3,%0%p1\";
+ %^vmv.v.v\t%0,%3
+ %^vmv.v.v\t%0,%3"
"&& register_operand (operands[0], <MODE>mode)
&& register_operand (operands[3], <MODE>mode)
&& satisfies_constraint_vu (operands[2])
@@ -1749,7 +1749,7 @@ (define_insn "@pred_store<mode>"
(match_operand:V 2 "register_operand" " vr")
(match_dup 0)))]
"TARGET_VECTOR"
- "vse<sew>.v\t%2,%0%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vse.v\t%2,%0%p1" : "vse<sew>.v\t%2,%0%p1"; }
[(set_attr "type" "vste")
(set_attr "mode" "<MODE>")
(set (attr "avl_type_idx") (const_int 4))
@@ -1773,11 +1773,11 @@ (define_insn_and_split "@pred_mov<mode>"
(match_operand:VB_VLS 2 "vector_undef_operand" " vu, vu, vu, vu, vu")))]
"TARGET_VECTOR"
"@
- vlm.v\t%0,%3
- vsm.v\t%3,%0
- vmmv.m\t%0,%3
- vmclr.m\t%0
- vmset.m\t%0"
+ %^vlm.v\t%0,%3
+ %^vsm.v\t%3,%0
+ %^vmmv.m\t%0,%3
+ %^vmclr.m\t%0
+ %^vmset.m\t%0"
"&& register_operand (operands[0], <MODE>mode)
&& register_operand (operands[3], <MODE>mode)
&& INTVAL (operands[5]) == riscv_vector::VLMAX"
@@ -1800,7 +1800,7 @@ (define_insn "@pred_store<mode>"
(match_operand:VB 2 "register_operand" " vr")
(match_dup 0)))]
"TARGET_VECTOR"
- "vsm.v\t%2,%0"
+ "%^vsm.v\t%2,%0"
[(set_attr "type" "vstm")
(set_attr "mode" "<MODE>")
(set (attr "avl_type_idx") (const_int 4))
@@ -1821,7 +1821,7 @@ (define_insn "@pred_merge<mode>"
(match_operand:<VM> 4 "register_operand" " vm,vm,vm,vm"))
(match_operand:V_VLS 1 "vector_merge_operand" " vu, 0,vu, 0")))]
"TARGET_VECTOR"
- "vmerge.v%o3m\t%0,%2,%v3,%4"
+ "%^vmerge.v%o3m\t%0,%2,%v3,%4"
[(set_attr "type" "vimerge")
(set_attr "mode" "<MODE>")])
@@ -1841,7 +1841,7 @@ (define_insn "@pred_merge<mode>_scalar"
(match_operand:<VM> 4 "register_operand" " vm,vm"))
(match_operand:V_VLSI_QHS 1 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vmerge.vxm\t%0,%2,%3,%4"
+ "%^vmerge.vxm\t%0,%2,%3,%4"
[(set_attr "type" "vimerge")
(set_attr "mode" "<MODE>")])
@@ -1893,7 +1893,7 @@ (define_insn "*pred_merge<mode>_scalar"
(match_operand:<VM> 4 "register_operand" " vm,vm"))
(match_operand:V_VLSI_D 1 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vmerge.vxm\t%0,%2,%3,%4"
+ "%^vmerge.vxm\t%0,%2,%3,%4"
[(set_attr "type" "vimerge")
(set_attr "mode" "<MODE>")])
@@ -1914,7 +1914,7 @@ (define_insn "*pred_merge<mode>_extended_scalar"
(match_operand:<VM> 4 "register_operand" " vm,vm"))
(match_operand:V_VLSI_D 1 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vmerge.vxm\t%0,%2,%3,%4"
+ "%^vmerge.vxm\t%0,%2,%3,%4"
[(set_attr "type" "vimerge")
(set_attr "mode" "<MODE>")])
@@ -2004,14 +2004,14 @@ (define_insn_and_split "*pred_broadcast<mode>"
(match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
"@
- vmv.v.x\t%0,%3
- vmv.v.x\t%0,%3
- vlse<sew>.v\t%0,%3,zero,%1.t
- vlse<sew>.v\t%0,%3,zero,%1.t
- vlse<sew>.v\t%0,%3,zero
- vlse<sew>.v\t%0,%3,zero
- vmv.s.x\t%0,%3
- vmv.s.x\t%0,%3"
+ %^vmv.v.x\t%0,%3
+ %^vmv.v.x\t%0,%3
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+ %^vmv.s.x\t%0,%3
+ %^vmv.s.x\t%0,%3"
"(register_operand (operands[3], <VEL>mode)
|| CONST_POLY_INT_P (operands[3]))
&& GET_MODE_BITSIZE (<VEL>mode) > GET_MODE_BITSIZE (Pmode)"
@@ -2065,14 +2065,14 @@ (define_insn "*pred_broadcast<mode>"
(match_operand:V_VLSF_ZVFHMIN 2 "vector_merge_operand" "vu, 0, vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
"@
- vfmv.v.f\t%0,%3
- vfmv.v.f\t%0,%3
- vlse<sew>.v\t%0,%3,zero,%1.t
- vlse<sew>.v\t%0,%3,zero,%1.t
- vlse<sew>.v\t%0,%3,zero
- vlse<sew>.v\t%0,%3,zero
- vfmv.s.f\t%0,%3
- vfmv.s.f\t%0,%3"
+ %^vfmv.v.f\t%0,%3
+ %^vfmv.v.f\t%0,%3
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero,%1.t\" : \"vlse<sew>.v\t%0,%3,zero,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,zero\" : \"vlse<sew>.v\t%0,%3,zero\";
+ %^vfmv.s.f\t%0,%3
+ %^vfmv.s.f\t%0,%3"
[(set_attr "type" "vfmov,vfmov,vlds,vlds,vlds,vlds,vfmovfv,vfmovfv")
(set_attr "mode" "<MODE>")])
@@ -2093,10 +2093,10 @@ (define_insn "*pred_broadcast<mode>_extended_scalar"
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
"@
- vmv.v.x\t%0,%3
- vmv.v.x\t%0,%3
- vmv.s.x\t%0,%3
- vmv.s.x\t%0,%3"
+ %^vmv.v.x\t%0,%3
+ %^vmv.v.x\t%0,%3
+ %^vmv.s.x\t%0,%3
+ %^vmv.s.x\t%0,%3"
[(set_attr "type" "vimov,vimov,vimovxv,vimovxv")
(set_attr "mode" "<MODE>")])
@@ -2114,7 +2114,7 @@ (define_insn "*pred_broadcast<mode>_zero"
(match_operand:V_VLS 3 "vector_const_0_operand" "Wc0, Wc0")
(match_operand:V_VLS 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vmv.s.x\t%0,zero"
+ "%^vmv.s.x\t%0,zero"
[(set_attr "type" "vimovxv,vimovxv")
(set_attr "mode" "<MODE>")])
@@ -2134,7 +2134,7 @@ (define_insn "*pred_broadcast<mode>_imm"
(match_operand:V_VLS 3 "vector_const_int_or_double_0_operand" "viWc0, viWc0")
(match_operand:V_VLS 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vmv.v.i\t%0,%v3"
+ "%^vmv.v.i\t%0,%v3"
[(set_attr "type" "vimov,vimov")
(set_attr "mode" "<MODE>")])
@@ -2162,12 +2162,12 @@ (define_insn "@pred_strided_load<mode>"
(match_operand:V 2 "vector_merge_operand" " 0, vu, vu, 0, vu, vu")))]
"TARGET_VECTOR"
"@
- vlse<sew>.v\t%0,%3,%z4%p1
- vlse<sew>.v\t%0,%3,%z4
- vlse<sew>.v\t%0,%3,%z4,%1.t
- vle<sew>.v\t%0,%3%p1
- vle<sew>.v\t%0,%3
- vle<sew>.v\t%0,%3,%1.t"
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4%p1\" : \"vlse<sew>.v\t%0,%3,%z4%p1\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4\" : \"vlse<sew>.v\t%0,%3,%z4\";
+ * return TARGET_XTHEADVECTOR ? \"th.vlse.v\t%0,%3,%z4,%1.t\" : \"vlse<sew>.v\t%0,%3,%z4,%1.t\";
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3%p1\" : \"vle<sew>.v\t%0,%3%p1\";
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3\" : \"vle<sew>.v\t%0,%3\";
+ * return TARGET_XTHEADVECTOR ? \"th.vle.v\t%0,%3,%1.t\" : \"vle<sew>.v\t%0,%3,%1.t\";"
[(set_attr "type" "vlds")
(set_attr "mode" "<MODE>")])
@@ -2186,8 +2186,8 @@ (define_insn "@pred_strided_store<mode>"
(match_dup 0)))]
"TARGET_VECTOR"
"@
- vsse<sew>.v\t%3,%0,%z2%p1
- vse<sew>.v\t%3,%0%p1"
+ * return TARGET_XTHEADVECTOR ? \"th.vsse.v\t%3,%0,%z2%p1\" : \"vsse<sew>.v\t%3,%0,%z2%p1\";
+ * return TARGET_XTHEADVECTOR ? \"th.vse.v\t%3,%0%p1\" : \"vse<sew>.v\t%3,%0%p1\";"
[(set_attr "type" "vsts")
(set_attr "mode" "<MODE>")
(set (attr "avl_type_idx") (const_int 5))])
@@ -2217,7 +2217,7 @@ (define_insn "@pred_indexed_<order>load<mode>_same_eew"
(match_operand:<VINDEX> 4 "register_operand" " vr, vr,vr, vr")] ORDER)
(match_operand:V 2 "vector_merge_operand" " vu, vu, 0, 0")))]
"TARGET_VECTOR"
- "vl<order>xei<sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxe.v\t%0,(%z3),%4%p1" : "vl<order>xei<sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vld<order>x")
(set_attr "mode" "<MODE>")])
@@ -2498,18 +2498,18 @@ (define_insn "@pred_<optab><mode>"
(match_operand:V_VLSI 2 "vector_merge_operand" "vu,0,vu,0,vu,0,vu,0,vu,0,vu,0")))]
"TARGET_VECTOR"
"@
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
- v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
- v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
- v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1"
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+ %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+ %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1
+ %^v<binop_reverse_vi_variant_insn>\t%0,<binop_reverse_vi_variant_op>%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2533,7 +2533,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand 4 "pmode_reg_or_uimm5_operand" " r, r, r, r, K, K, K, K"))
(match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0,vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.v%o4\t%0,%3,%4%p1"
+ "%^v<insn>.v%o4\t%0,%3,%4%p1"
[(set_attr "type" "vshift")
(set_attr "mode" "<MODE>")])
@@ -2555,7 +2555,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:V_VLSI_QHS 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2576,7 +2576,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:<VEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ")))
(match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2597,7 +2597,7 @@ (define_insn "@pred_sub<mode>_reverse_scalar"
(match_operand:V_VLSI_QHS 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_QHS 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vrsub.vx\t%0,%3,%z4%p1"
+ "%^vrsub.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vialu")
(set_attr "mode" "<MODE>")])
@@ -2653,7 +2653,7 @@ (define_insn "*pred_<optab><mode>_scalar"
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2675,7 +2675,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2729,7 +2729,7 @@ (define_insn "*pred_<optab><mode>_scalar"
(match_operand:<VEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ")))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2751,7 +2751,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
(match_operand:<VSUBEL> 4 "reg_or_0_operand" "rJ,rJ, rJ, rJ"))))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%z4%p1"
+ "%^v<insn>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -2805,7 +2805,7 @@ (define_insn "*pred_sub<mode>_reverse_scalar"
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vrsub.vx\t%0,%3,%z4%p1"
+ "%^vrsub.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vialu")
(set_attr "mode" "<MODE>")])
@@ -2827,7 +2827,7 @@ (define_insn "*pred_sub<mode>_extended_reverse_scalar"
(match_operand:V_VLSI_D 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vrsub.vx\t%0,%3,%z4%p1"
+ "%^vrsub.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vialu")
(set_attr "mode" "<MODE>")])
@@ -2848,7 +2848,7 @@ (define_insn "@pred_mulh<v_su><mode>"
(match_operand:VFULLI 4 "register_operand" "vr,vr, vr, vr")] VMULH)
(match_operand:VFULLI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vmulh<v_su>.vv\t%0,%3,%4%p1"
+ "%^vmulh<v_su>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vimul")
(set_attr "mode" "<MODE>")])
@@ -2869,7 +2869,7 @@ (define_insn "@pred_mulh<v_su><mode>_scalar"
(match_operand:VI_QHS 3 "register_operand" "vr,vr, vr, vr")] VMULH)
(match_operand:VI_QHS 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+ "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vimul")
(set_attr "mode" "<MODE>")])
@@ -2923,7 +2923,7 @@ (define_insn "*pred_mulh<v_su><mode>_scalar"
(match_operand:VFULLI_D 3 "register_operand" "vr,vr, vr, vr")] VMULH)
(match_operand:VFULLI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+ "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vimul")
(set_attr "mode" "<MODE>")])
@@ -2945,7 +2945,7 @@ (define_insn "*pred_mulh<v_su><mode>_extended_scalar"
(match_operand:VFULLI_D 3 "register_operand" "vr,vr, vr, vr")] VMULH)
(match_operand:VFULLI_D 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vmulh<v_su>.vx\t%0,%3,%z4%p1"
+ "%^vmulh<v_su>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vimul")
(set_attr "mode" "<MODE>")])
@@ -2966,7 +2966,7 @@ (define_insn "@pred_adc<mode>"
(match_operand:<VM> 4 "register_operand" "vm,vm,vm,vm")] UNSPEC_VADC)
(match_operand:VI 1 "vector_merge_operand" "vu, 0,vu, 0")))]
"TARGET_VECTOR"
- "vadc.v%o3m\t%0,%2,%v3,%4"
+ "%^vadc.v%o3m\t%0,%2,%v3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -2990,7 +2990,7 @@ (define_insn "@pred_sbc<mode>"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VSBC)
(match_operand:VI 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vsbc.vvm\t%0,%2,%3,%4"
+ "%^vsbc.vvm\t%0,%2,%3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3015,7 +3015,7 @@ (define_insn "@pred_adc<mode>_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VADC)
(match_operand:VI_QHS 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vadc.vxm\t%0,%2,%3,%4"
+ "%^vadc.vxm\t%0,%2,%3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3040,7 +3040,7 @@ (define_insn "@pred_sbc<mode>_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VSBC)
(match_operand:VI_QHS 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vsbc.vxm\t%0,%2,%z3,%4"
+ "%^vsbc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3098,7 +3098,7 @@ (define_insn "*pred_adc<mode>_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VADC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vadc.vxm\t%0,%2,%z3,%4"
+ "%^vadc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3124,7 +3124,7 @@ (define_insn "*pred_adc<mode>_extended_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VADC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vadc.vxm\t%0,%2,%z3,%4"
+ "%^vadc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3182,7 +3182,7 @@ (define_insn "*pred_sbc<mode>_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VSBC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vsbc.vxm\t%0,%2,%z3,%4"
+ "%^vsbc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3208,7 +3208,7 @@ (define_insn "*pred_sbc<mode>_extended_scalar"
(match_operand:<VM> 4 "register_operand" "vm,vm")] UNSPEC_VSBC)
(match_operand:VI_D 1 "vector_merge_operand" "vu, 0")))]
"TARGET_VECTOR"
- "vsbc.vxm\t%0,%2,%z3,%4"
+ "%^vsbc.vxm\t%0,%2,%z3,%4"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -3229,7 +3229,7 @@ (define_insn "@pred_madc<mode>"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
"TARGET_VECTOR"
- "vmadc.v%o2m\t%0,%1,%v2,%3"
+ "%^vmadc.v%o2m\t%0,%1,%v2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3248,7 +3248,7 @@ (define_insn "@pred_msbc<mode>"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
"TARGET_VECTOR"
- "vmsbc.vvm\t%0,%1,%2,%3"
+ "%^vmsbc.vvm\t%0,%1,%2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3268,7 +3268,7 @@ (define_insn "@pred_madc<mode>_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
"TARGET_VECTOR"
- "vmadc.vxm\t%0,%1,%2,%3"
+ "%^vmadc.vxm\t%0,%1,%2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3288,7 +3288,7 @@ (define_insn "@pred_msbc<mode>_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
"TARGET_VECTOR"
- "vmsbc.vxm\t%0,%1,%z2,%3"
+ "%^vmsbc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3337,7 +3337,7 @@ (define_insn "*pred_madc<mode>_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
"TARGET_VECTOR"
- "vmadc.vxm\t%0,%1,%z2,%3"
+ "%^vmadc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3358,7 +3358,7 @@ (define_insn "*pred_madc<mode>_extended_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMADC))]
"TARGET_VECTOR"
- "vmadc.vxm\t%0,%1,%z2,%3"
+ "%^vmadc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3407,7 +3407,7 @@ (define_insn "*pred_msbc<mode>_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
"TARGET_VECTOR"
- "vmsbc.vxm\t%0,%1,%z2,%3"
+ "%^vmsbc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3428,7 +3428,7 @@ (define_insn "*pred_msbc<mode>_extended_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_VMSBC))]
"TARGET_VECTOR"
- "vmsbc.vxm\t%0,%1,%z2,%3"
+ "%^vmsbc.vxm\t%0,%1,%z2,%3"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3446,7 +3446,7 @@ (define_insn "@pred_madc<mode>_overflow"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmadc.v%o2\t%0,%1,%v2"
+ "%^vmadc.v%o2\t%0,%1,%v2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3464,7 +3464,7 @@ (define_insn "@pred_msbc<mode>_overflow"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmsbc.vv\t%0,%1,%2"
+ "%^vmsbc.vv\t%0,%1,%2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3483,7 +3483,7 @@ (define_insn "@pred_madc<mode>_overflow_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmadc.vx\t%0,%1,%z2"
+ "%^vmadc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3502,7 +3502,7 @@ (define_insn "@pred_msbc<mode>_overflow_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmsbc.vx\t%0,%1,%z2"
+ "%^vmsbc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3549,7 +3549,7 @@ (define_insn "*pred_madc<mode>_overflow_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmadc.vx\t%0,%1,%z2"
+ "%^vmadc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3569,7 +3569,7 @@ (define_insn "*pred_madc<mode>_overflow_extended_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmadc.vx\t%0,%1,%z2"
+ "%^vmadc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3616,7 +3616,7 @@ (define_insn "*pred_msbc<mode>_overflow_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmsbc.vx\t%0,%1,%z2"
+ "%^vmsbc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3636,7 +3636,7 @@ (define_insn "*pred_msbc<mode>_overflow_extended_scalar"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)] UNSPEC_OVERFLOW))]
"TARGET_VECTOR"
- "vmsbc.vx\t%0,%1,%z2"
+ "%^vmsbc.vx\t%0,%1,%z2"
[(set_attr "type" "vicalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "3")
@@ -3660,11 +3660,34 @@ (define_insn "@pred_<optab><mode>"
(match_operand 7 "const_int_operand" " i, i, i, i")
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
- (any_int_unop:V_VLSI
+ (not_unop:V_VLSI
(match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
(match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.v\t%0,%3%p1"
+ "%^vnot.v\t%0,%3%p1"
+ [(set_attr "type" "vialu")
+ (set_attr "mode" "<MODE>")
+ (set_attr "vl_op_idx" "4")
+ (set (attr "ta") (symbol_ref "riscv_vector::get_ta (operands[5])"))
+ (set (attr "ma") (symbol_ref "riscv_vector::get_ma (operands[6])"))
+ (set (attr "avl_type_idx") (const_int 7))])
+
+(define_insn "@pred_<optab><mode>"
+ [(set (match_operand:V_VLSI 0 "register_operand" "=vd,vd, vr, vr")
+ (if_then_else:V_VLSI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vm,vm,Wc1,Wc1")
+ (match_operand 4 "vector_length_operand" "rK,rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (neg_unop:V_VLSI
+ (match_operand:V_VLSI 3 "register_operand" "vr,vr, vr, vr"))
+ (match_operand:V_VLSI 2 "vector_merge_operand" "vu, 0, vu, 0")))]
+ "TARGET_VECTOR"
+ { return TARGET_XTHEADVECTOR ? "th.vrsub.vx\t%0,%3,x0%p1" : "vneg.v\t%0,%3%p1"; }
[(set_attr "type" "vialu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -3696,7 +3719,7 @@ (define_insn "@pred_<optab><mode>_vf2"
(any_extend:VWEXTI
(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr, vr"))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"v<sz>ext.vf2\t%0,%3%p1"
[(set_attr "type" "vext")
(set_attr "mode" "<MODE>")])
@@ -3716,7 +3739,7 @@ (define_insn "@pred_<optab><mode>_vf4"
(any_extend:VQEXTI
(match_operand:<V_QUAD_TRUNC> 3 "register_operand" " vr, vr"))
(match_operand:VQEXTI 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"v<sz>ext.vf4\t%0,%3%p1"
[(set_attr "type" "vext")
(set_attr "mode" "<MODE>")])
@@ -3736,7 +3759,7 @@ (define_insn "@pred_<optab><mode>_vf8"
(any_extend:VOEXTI
(match_operand:<V_OCT_TRUNC> 3 "register_operand" " vr, vr"))
(match_operand:VOEXTI 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"v<sz>ext.vf8\t%0,%3%p1"
[(set_attr "type" "vext")
(set_attr "mode" "<MODE>")])
@@ -3760,7 +3783,7 @@ (define_insn "@pred_dual_widen_<any_widen_binop:optab><any_extend:su><mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vw<any_widen_binop:insn><any_extend:u>.vv\t%0,%3,%4%p1"
+ "%^vw<any_widen_binop:insn><any_extend:u>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vi<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3783,7 +3806,7 @@ (define_insn "@pred_dual_widen_<any_widen_binop:optab><any_extend:su><mode>_scal
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ"))))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vw<any_widen_binop:insn><any_extend:u>.vx\t%0,%3,%z4%p1"
+ "%^vw<any_widen_binop:insn><any_extend:u>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vi<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3804,7 +3827,7 @@ (define_insn "@pred_single_widen_sub<any_extend:su><mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vwsub<any_extend:u>.wv\t%0,%3,%4%p1"
+ "%^vwsub<any_extend:u>.wv\t%0,%3,%4%p1"
[(set_attr "type" "viwalu")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3825,7 +3848,7 @@ (define_insn "@pred_single_widen_add<any_extend:su><mode>"
(match_operand:VWEXTI 3 "register_operand" " vr, vr"))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vwadd<any_extend:u>.wv\t%0,%3,%4%p1"
+ "%^vwadd<any_extend:u>.wv\t%0,%3,%4%p1"
[(set_attr "type" "viwalu")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3847,7 +3870,7 @@ (define_insn "@pred_single_widen_<plus_minus:optab><any_extend:su><mode>_scalar"
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ"))))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vw<plus_minus:insn><any_extend:u>.wx\t%0,%3,%z4%p1"
+ "%^vw<plus_minus:insn><any_extend:u>.wx\t%0,%3,%z4%p1"
[(set_attr "type" "vi<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3869,7 +3892,7 @@ (define_insn "@pred_widen_mulsu<mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vwmulsu.vv\t%0,%3,%4%p1"
+ "%^vwmulsu.vv\t%0,%3,%4%p1"
[(set_attr "type" "viwmul")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3892,7 +3915,7 @@ (define_insn "@pred_widen_mulsu<mode>_scalar"
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ"))))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vwmulsu.vx\t%0,%3,%z4%p1"
+ "%^vwmulsu.vx\t%0,%3,%z4%p1"
[(set_attr "type" "viwmul")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3915,7 +3938,7 @@ (define_insn "@pred_<optab><mode>"
(reg:<VEL> X0_REGNUM)))
(match_operand:VWEXTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vwcvt<u>.x.x.v\t%0,%3%p1"
+ "%^vwcvt<u>.x.x.v\t%0,%3%p1"
[(set_attr "type" "viwalu")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set_attr "vl_op_idx" "4")
@@ -3950,7 +3973,7 @@ (define_insn "@pred_narrow_<optab><mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " 0, 0, 0, 0,vr, vr, vr, vr, vk, vk, vk, vk")))
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " 0,vu, 0, vu,vu, vu, vu, 0, vu, vu, vu, 0")))]
"TARGET_VECTOR"
- "vn<insn>.w%o4\t%0,%3,%v4%p1"
+ "%^vn<insn>.w%o4\t%0,%3,%v4%p1"
[(set_attr "type" "vnshift")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3971,7 +3994,7 @@ (define_insn "@pred_narrow_<optab><mode>_scalar"
(match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")))
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vn<insn>.w%o4\t%0,%3,%4%p1"
+ "%^vn<insn>.w%o4\t%0,%3,%4%p1"
[(set_attr "type" "vnshift")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -3991,7 +4014,7 @@ (define_insn "@pred_trunc<mode>"
(match_operand:VWEXTI 3 "register_operand" " 0, 0, 0, 0, vr, vr"))
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vncvt.x.x.w\t%0,%3%p1"
+ "%^vncvt.x.x.w\t%0,%3%p1"
[(set_attr "type" "vnshift")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set_attr "vl_op_idx" "4")
@@ -4028,14 +4051,14 @@ (define_insn "@pred_<optab><mode>"
(match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
"@
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<insn>.vv\t%0,%3,%4%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
- v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1"
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<insn>.vv\t%0,%3,%4%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1
+ %^v<binop_vi_variant_insn>\t%0,<binop_vi_variant_op>%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4057,7 +4080,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:VI_QHS 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VI_QHS 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4078,7 +4101,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:<VEL> 4 "register_operand" " r, r, r, r")))
(match_operand:VI_QHS 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4132,7 +4155,7 @@ (define_insn "*pred_<optab><mode>_scalar"
(match_operand:VI_D 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4154,7 +4177,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
(match_operand:VI_D 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4208,7 +4231,7 @@ (define_insn "*pred_<optab><mode>_scalar"
(match_operand:<VEL> 4 "register_operand" " r, r, r, r")))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4230,7 +4253,7 @@ (define_insn "*pred_<optab><mode>_extended_scalar"
(match_operand:<VSUBEL> 4 "register_operand" " r, r, r, r"))))
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<insn>.vx\t%0,%3,%4%p1"
+ "%^v<insn>.vx\t%0,%3,%4%p1"
[(set_attr "type" "<int_binop_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4252,7 +4275,7 @@ (define_insn "@pred_<sat_op><mode>"
(match_operand:VI 4 "register_operand" " vr, vr, vr, vr")] VSAT_OP)
(match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<sat_op>.vv\t%0,%3,%4%p1"
+ "%^v<sat_op>.vv\t%0,%3,%4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4275,7 +4298,7 @@ (define_insn "@pred_<sat_op><mode>_scalar"
(match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ")] VSAT_ARITH_OP)
(match_operand:VI_QHS 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<sat_op>.vx\t%0,%3,%z4%p1"
+ "%^v<sat_op>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4297,7 +4320,7 @@ (define_insn "@pred_<sat_op><mode>_scalar"
(match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK")] VSAT_SHIFT_OP)
(match_operand:VI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<sat_op>.v%o4\t%0,%3,%4%p1"
+ "%^v<sat_op>.v%o4\t%0,%3,%4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4355,7 +4378,7 @@ (define_insn "*pred_<sat_op><mode>_scalar"
(match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ")] VSAT_ARITH_OP)
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<sat_op>.vx\t%0,%3,%z4%p1"
+ "%^v<sat_op>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4378,7 +4401,7 @@ (define_insn "*pred_<sat_op><mode>_extended_scalar"
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSAT_ARITH_OP)
(match_operand:VI_D 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "v<sat_op>.vx\t%0,%3,%z4%p1"
+ "%^v<sat_op>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "<sat_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -4401,7 +4424,7 @@ (define_insn "@pred_narrow_clip<v_su><mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "vector_shift_operand" " 0, 0, 0, 0,vr, vr, vr, vr, vk, vk, vk, vk")] VNCLIP)
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " 0,vu, 0, vu,vu, vu, vu, 0, vu, vu, vu, 0")))]
"TARGET_VECTOR"
- "vnclip<v_su>.w%o4\t%0,%3,%v4%p1"
+ "%^vnclip<v_su>.w%o4\t%0,%3,%v4%p1"
[(set_attr "type" "vnclip")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -4423,7 +4446,7 @@ (define_insn "@pred_narrow_clip<v_su><mode>_scalar"
(match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK, rK, rK")] VNCLIP)
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vnclip<v_su>.w%o4\t%0,%3,%4%p1"
+ "%^vnclip<v_su>.w%o4\t%0,%3,%4%p1"
[(set_attr "type" "vnclip")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -4466,7 +4489,7 @@ (define_insn "*pred_cmp<mode>_merge_tie_mask"
(match_operand:V_VLSI 4 "vector_arith_operand" "vrvi")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ "%^vms%B2.v%o4\t%0,%3,%v4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4490,7 +4513,7 @@ (define_insn "*pred_cmp<mode>"
(match_operand:V_VLSI 5 "vector_arith_operand" " vr, vr, vi, vi")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.v%o5\t%0,%4,%v5%p1"
+ "%^vms%B3.v%o5\t%0,%4,%v5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4510,7 +4533,7 @@ (define_insn "*pred_cmp<mode>_narrow"
(match_operand:V_VLSI 5 "vector_arith_operand" " vrvi, vrvi, 0, 0, vrvi, 0, 0, vrvi, vrvi")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, 0, 0, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.v%o5\t%0,%4,%v5%p1"
+ "%^vms%B3.v%o5\t%0,%4,%v5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4546,7 +4569,7 @@ (define_insn "*pred_ltge<mode>_merge_tie_mask"
(match_operand:V_VLSI 4 "vector_neg_arith_operand" "vrvj")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.v%o4\t%0,%3,%v4,v0.t"
+ "%^vms%B2.v%o4\t%0,%3,%v4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4570,7 +4593,7 @@ (define_insn "*pred_ltge<mode>"
(match_operand:V_VLSI 5 "vector_neg_arith_operand" " vr, vr, vj, vj")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.v%o5\t%0,%4,%v5%p1"
+ "%^vms%B3.v%o5\t%0,%4,%v5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4590,7 +4613,7 @@ (define_insn "*pred_ltge<mode>_narrow"
(match_operand:V_VLSI 5 "vector_neg_arith_operand" " vrvj, vrvj, 0, 0, vrvj, 0, 0, vrvj, vrvj")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, 0, 0, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.v%o5\t%0,%4,%v5%p1"
+ "%^vms%B3.v%o5\t%0,%4,%v5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4628,7 +4651,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
(match_operand:<VEL> 4 "register_operand" " r"))])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4653,7 +4676,7 @@ (define_insn "*pred_cmp<mode>_scalar"
(match_operand:<VEL> 5 "register_operand" " r, r"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4674,7 +4697,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
(match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4712,7 +4735,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
(match_operand:V_VLSI_QHS 3 "register_operand" " vr")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4737,7 +4760,7 @@ (define_insn "*pred_eqne<mode>_scalar"
(match_operand:V_VLSI_QHS 4 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4758,7 +4781,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
(match_operand:V_VLSI_QHS 4 "register_operand" " vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4853,7 +4876,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
(match_operand:<VEL> 4 "register_operand" " r"))])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4877,7 +4900,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
(match_operand:V_VLSI_D 3 "register_operand" " vr")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -4902,7 +4925,7 @@ (define_insn "*pred_cmp<mode>_scalar"
(match_operand:<VEL> 5 "register_operand" " r, r"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4923,7 +4946,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
(match_operand:<VEL> 5 "register_operand" " r, r, r, r, r"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4944,7 +4967,7 @@ (define_insn "*pred_eqne<mode>_scalar"
(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4965,7 +4988,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
(match_operand:V_VLSI_D 4 "register_operand" " vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -4986,7 +5009,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar_merge_tie_mask"
(match_operand:<VSUBEL> 4 "register_operand" " r")))])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -5012,7 +5035,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar"
(match_operand:<VSUBEL> 5 "register_operand" " r, r")))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5033,7 +5056,7 @@ (define_insn "*pred_cmp<mode>_extended_scalar_narrow"
(match_operand:<VSUBEL> 5 "register_operand" " r, r, r, r, r")))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5054,7 +5077,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar_merge_tie_mask"
(match_operand:V_VLSI_D 3 "register_operand" " vr")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vms%B2.vx\t%0,%3,%4,v0.t"
+ "%^vms%B2.vx\t%0,%3,%4,v0.t"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -5080,7 +5103,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar"
(match_operand:V_VLSI_D 4 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5101,7 +5124,7 @@ (define_insn "*pred_eqne<mode>_extended_scalar_narrow"
(match_operand:V_VLSI_D 4 "register_operand" " vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vms%B3.vx\t%0,%4,%5%p1"
+ "%^vms%B3.vx\t%0,%4,%5%p1"
[(set_attr "type" "vicmp")
(set_attr "mode" "<MODE>")])
@@ -5270,12 +5293,12 @@ (define_insn "*pred_mul_plus<mode>_undef"
(match_operand:V_VLSI 2 "vector_undef_operand")))]
"TARGET_VECTOR"
"@
- vmadd.vv\t%0,%4,%5%p1
- vmacc.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%4\;vmacc.vv\t%0,%3,%4%p1
- vmadd.vv\t%0,%4,%5%p1
- vmacc.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%5\;vmacc.vv\t%0,%3,%4%p1"
+ %^vmadd.vv\t%0,%4,%5%p1
+ %^vmacc.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%3,%4%p1
+ %^vmadd.vv\t%0,%4,%5%p1
+ %^vmacc.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%5\;%^vmacc.vv\t%0,%3,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")])
@@ -5298,10 +5321,10 @@ (define_insn "*pred_madd<mode>"
(match_dup 2)))]
"TARGET_VECTOR"
"@
- vmadd.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1
- vmadd.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1"
+ %^vmadd.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vmadd.vv\t%0,%3,%4%p1
+ %^vmadd.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vmadd.vv\t%0,%3,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5329,10 +5352,10 @@ (define_insn "*pred_macc<mode>"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vmacc.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1
- vmacc.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1"
+ %^vmacc.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%2,%3%p1
+ %^vmacc.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vv\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5382,10 +5405,10 @@ (define_insn "*pred_madd<mode>_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vmadd.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1
- vmadd.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1"
+ %^vmadd.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vmadd.vx\t%0,%2,%4%p1
+ %^vmadd.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vmadd.vx\t%0,%2,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5414,10 +5437,10 @@ (define_insn "*pred_macc<mode>_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vmacc.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
- vmacc.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+ %^vmacc.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1
+ %^vmacc.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5482,10 +5505,10 @@ (define_insn "*pred_madd<mode>_extended_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vmadd.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1
- vmadd.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1"
+ %^vmadd.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vmadd.vx\t%0,%2,%4%p1
+ %^vmadd.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vmadd.vx\t%0,%2,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5515,10 +5538,10 @@ (define_insn "*pred_macc<mode>_extended_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vmacc.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
- vmacc.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+ %^vmacc.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1
+ %^vmacc.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vmacc.vx\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5568,12 +5591,12 @@ (define_insn "*pred_minus_mul<mode>_undef"
(match_operand:V_VLSI 2 "vector_undef_operand")))]
"TARGET_VECTOR"
"@
- vnmsub.vv\t%0,%4,%5%p1
- vnmsac.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1
- vnmsub.vv\t%0,%4,%5%p1
- vnmsac.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1"
+ %^vnmsub.vv\t%0,%4,%5%p1
+ %^vnmsac.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vv\t%0,%4,%5%p1
+ %^vnmsub.vv\t%0,%4,%5%p1
+ %^vnmsac.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vv\t%0,%4,%5%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")])
@@ -5596,10 +5619,10 @@ (define_insn "*pred_nmsub<mode>"
(match_dup 2)))]
"TARGET_VECTOR"
"@
- vnmsub.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1
- vnmsub.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1"
+ %^vnmsub.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vnmsub.vv\t%0,%3,%4%p1
+ %^vnmsub.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vnmsub.vv\t%0,%3,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5627,10 +5650,10 @@ (define_insn "*pred_nmsac<mode>"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vnmsac.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1
- vnmsac.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1"
+ %^vnmsac.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vv\t%0,%2,%3%p1
+ %^vnmsac.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vv\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5680,10 +5703,10 @@ (define_insn "*pred_nmsub<mode>_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vnmsub.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
- vnmsub.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+ %^vnmsub.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1
+ %^vnmsub.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5712,10 +5735,10 @@ (define_insn "*pred_nmsac<mode>_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vnmsac.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
- vnmsac.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+ %^vnmsac.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1
+ %^vnmsac.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5780,10 +5803,10 @@ (define_insn "*pred_nmsub<mode>_extended_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vnmsub.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
- vnmsub.vx\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+ %^vnmsub.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1
+ %^vnmsub.vx\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vnmsub.vx\t%0,%2,%4%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -5813,10 +5836,10 @@ (define_insn "*pred_nmsac<mode>_extended_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vnmsac.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
- vnmsac.vx\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+ %^vnmsac.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1
+ %^vnmsac.vx\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vnmsac.vx\t%0,%2,%3%p1"
[(set_attr "type" "vimuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -5852,7 +5875,7 @@ (define_insn "@pred_widen_mul_plus<su><mode>"
(match_operand:VWEXTI 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vwmacc<u>.vv\t%0,%3,%4%p1"
+ "%^vwmacc<u>.vv\t%0,%3,%4%p1"
[(set_attr "type" "viwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -5877,7 +5900,7 @@ (define_insn "@pred_widen_mul_plus<su><mode>_scalar"
(match_operand:VWEXTI 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vwmacc<u>.vx\t%0,%3,%4%p1"
+ "%^vwmacc<u>.vx\t%0,%3,%4%p1"
[(set_attr "type" "viwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -5901,7 +5924,7 @@ (define_insn "@pred_widen_mul_plussu<mode>"
(match_operand:VWEXTI 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vwmaccsu.vv\t%0,%3,%4%p1"
+ "%^vwmaccsu.vv\t%0,%3,%4%p1"
[(set_attr "type" "viwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -5926,7 +5949,7 @@ (define_insn "@pred_widen_mul_plussu<mode>_scalar"
(match_operand:VWEXTI 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vwmaccsu.vx\t%0,%3,%4%p1"
+ "%^vwmaccsu.vx\t%0,%3,%4%p1"
[(set_attr "type" "viwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -5951,7 +5974,7 @@ (define_insn "@pred_widen_mul_plusus<mode>_scalar"
(match_operand:VWEXTI 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vwmaccus.vx\t%0,%3,%4%p1"
+ "%^vwmaccus.vx\t%0,%3,%4%p1"
[(set_attr "type" "viwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -5986,7 +6009,7 @@ (define_insn "@pred_<optab><mode>"
(match_operand:VB_VLS 4 "register_operand" " vr"))
(match_operand:VB_VLS 2 "vector_undef_operand" " vu")))]
"TARGET_VECTOR"
- "vm<insn>.mm\t%0,%3,%4"
+ "%^vm<insn>.mm\t%0,%3,%4"
[(set_attr "type" "vmalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "5")
@@ -6007,7 +6030,7 @@ (define_insn "@pred_n<optab><mode>"
(match_operand:VB_VLS 4 "register_operand" " vr")))
(match_operand:VB_VLS 2 "vector_undef_operand" " vu")))]
"TARGET_VECTOR"
- "vm<ninsn>.mm\t%0,%3,%4"
+ "%^vm<ninsn>.mm\t%0,%3,%4"
[(set_attr "type" "vmalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "5")
@@ -6028,7 +6051,7 @@ (define_insn "@pred_<optab>not<mode>"
(match_operand:VB_VLS 4 "register_operand" " vr")))
(match_operand:VB_VLS 2 "vector_undef_operand" " vu")))]
"TARGET_VECTOR"
- "vm<insn>n.mm\t%0,%3,%4"
+ "%^vm<insn>n.mm\t%0,%3,%4"
[(set_attr "type" "vmalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "5")
@@ -6047,7 +6070,7 @@ (define_insn "@pred_not<mode>"
(match_operand:VB_VLS 3 "register_operand" " vr"))
(match_operand:VB_VLS 2 "vector_undef_operand" " vu")))]
"TARGET_VECTOR"
- "vmnot.m\t%0,%3"
+ "%^vmnot.m\t%0,%3"
[(set_attr "type" "vmalu")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -6065,7 +6088,7 @@ (define_insn "@pred_popcount<VB:mode><P:mode>"
(reg:SI VL_REGNUM)
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
"TARGET_VECTOR"
- "vcpop.m\t%0,%2%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vmpopc.m\t%0,%2%p1" : "vcpop.m\t%0,%2%p1"; }
[(set_attr "type" "vmpop")
(set_attr "mode" "<VB:MODE>")])
@@ -6083,7 +6106,7 @@ (define_insn "@pred_ffs<VB:mode><P:mode>"
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))
(const_int -1)))]
"TARGET_VECTOR"
- "vfirst.m\t%0,%2%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vmfirst.m\t%0,%2%p1" : "vfirst.m\t%0,%2%p1"; }
[(set_attr "type" "vmffs")
(set_attr "mode" "<VB:MODE>")])
@@ -6101,7 +6124,7 @@ (define_insn "@pred_<misc_op><mode>"
[(match_operand:VB 3 "register_operand" " vr, vr")] VMISC)
(match_operand:VB 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vm<misc_op>.m\t%0,%3%p1"
+ "%^vm<misc_op>.m\t%0,%3%p1"
[(set_attr "type" "vmsfs")
(set_attr "mode" "<MODE>")])
@@ -6120,7 +6143,7 @@ (define_insn "@pred_iota<mode>"
[(match_operand:<VM> 3 "register_operand" " vr, vr")] UNSPEC_VIOTA)
(match_operand:VI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "viota.m\t%0,%3%p1"
+ "%^viota.m\t%0,%3%p1"
[(set_attr "type" "vmiota")
(set_attr "mode" "<MODE>")])
@@ -6138,7 +6161,7 @@ (define_insn "@pred_series<mode>"
(vec_series:V_VLSI (const_int 0) (const_int 1))
(match_operand:V_VLSI 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vid.v\t%0%p1"
+ "%^vid.v\t%0%p1"
[(set_attr "type" "vmidx")
(set_attr "mode" "<MODE>")])
@@ -6170,7 +6193,7 @@ (define_insn "@pred_<optab><mode>"
(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.vv\t%0,%3,%4%p1"
+ "%^vf<insn>.vv\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6192,7 +6215,7 @@ (define_insn "@pred_<optab><mode>"
(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.vv\t%0,%3,%4%p1"
+ "%^vf<insn>.vv\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -6236,7 +6259,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:VF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.vf\t%0,%3,%4%p1"
+ "%^vf<insn>.vf\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6259,7 +6282,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:VF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.vf\t%0,%3,%4%p1"
+ "%^vf<insn>.vf\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -6304,7 +6327,7 @@ (define_insn "@pred_<optab><mode>_scalar"
(match_operand:<VEL> 4 "register_operand" " f, f, f, f")))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.vf\t%0,%3,%4%p1"
+ "%^vf<insn>.vf\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6329,7 +6352,7 @@ (define_insn "@pred_<optab><mode>_reverse_scalar"
(match_operand:VF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfr<insn>.vf\t%0,%3,%4%p1"
+ "%^vfr<insn>.vf\t%0,%3,%4%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6351,7 +6374,7 @@ (define_insn "@pred_<copysign><mode>"
(match_operand:V_VLSF 4 "register_operand" " vr, vr, vr, vr")] VCOPYSIGNS)
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfsgnj<nx>.vv\t%0,%3,%4%p1"
+ "%^vfsgnj<nx>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfsgnj")
(set_attr "mode" "<MODE>")])
@@ -6372,7 +6395,7 @@ (define_insn "@pred_ncopysign<mode>"
(match_operand:VF 4 "register_operand" " vr, vr, vr, vr")] UNSPEC_VCOPYSIGN))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfsgnjn.vv\t%0,%3,%4%p1"
+ "%^vfsgnjn.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfsgnj")
(set_attr "mode" "<MODE>")])
@@ -6393,7 +6416,7 @@ (define_insn "@pred_<copysign><mode>_scalar"
(match_operand:<VEL> 4 "register_operand" " f, f, f, f"))] VCOPYSIGNS)
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfsgnj<nx>.vf\t%0,%3,%4%p1"
+ "%^vfsgnj<nx>.vf\t%0,%3,%4%p1"
[(set_attr "type" "vfsgnj")
(set_attr "mode" "<MODE>")])
@@ -6415,7 +6438,7 @@ (define_insn "@pred_ncopysign<mode>_scalar"
(match_operand:<VEL> 4 "register_operand" " f, f, f, f"))] UNSPEC_VCOPYSIGN))
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfsgnjn.vf\t%0,%3,%4%p1"
+ "%^vfsgnjn.vf\t%0,%3,%4%p1"
[(set_attr "type" "vfsgnj")
(set_attr "mode" "<MODE>")])
@@ -6471,12 +6494,12 @@ (define_insn "*pred_mul_<optab><mode>_undef"
(match_operand:V_VLSF 2 "vector_undef_operand")))]
"TARGET_VECTOR"
"@
- vf<madd_msub>.vv\t%0,%4,%5%p1
- vf<macc_msac>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vf<madd_msub>.vv\t%0,%4,%5%p1
- vf<madd_msub>.vv\t%0,%4,%5%p1
- vf<macc_msac>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vf<madd_msub>.vv\t%0,%4,%5%p1"
+ %^vf<madd_msub>.vv\t%0,%4,%5%p1
+ %^vf<macc_msac>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vv\t%0,%4,%5%p1
+ %^vf<madd_msub>.vv\t%0,%4,%5%p1
+ %^vf<macc_msac>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vv\t%0,%4,%5%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6503,10 +6526,10 @@ (define_insn "*pred_<madd_msub><mode>"
(match_dup 2)))]
"TARGET_VECTOR"
"@
- vf<madd_msub>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vf<madd_msub>.vv\t%0,%3,%4%p1
- vf<madd_msub>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vf<madd_msub>.vv\t%0,%3,%4%p1"
+ %^vf<madd_msub>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vf<madd_msub>.vv\t%0,%3,%4%p1
+ %^vf<madd_msub>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vf<madd_msub>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -6538,10 +6561,10 @@ (define_insn "*pred_<macc_msac><mode>"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vf<macc_msac>.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<macc_msac>.vv\t%0,%2,%3%p1
- vf<macc_msac>.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<macc_msac>.vv\t%0,%2,%3%p1"
+ %^vf<macc_msac>.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vv\t%0,%2,%3%p1
+ %^vf<macc_msac>.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vv\t%0,%2,%3%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -6597,10 +6620,10 @@ (define_insn "*pred_<madd_msub><mode>_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vf<madd_msub>.vf\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vf<madd_msub>.vf\t%0,%2,%4%p1
- vf<madd_msub>.vf\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vf<madd_msub>.vf\t%0,%2,%4%p1"
+ %^vf<madd_msub>.vf\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vf\t%0,%2,%4%p1
+ %^vf<madd_msub>.vf\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<madd_msub>.vf\t%0,%2,%4%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -6633,10 +6656,10 @@ (define_insn "*pred_<macc_msac><mode>_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vf<macc_msac>.vf\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<macc_msac>.vf\t%0,%2,%3%p1
- vf<macc_msac>.vf\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<macc_msac>.vf\t%0,%2,%3%p1"
+ %^vf<macc_msac>.vf\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vf\t%0,%2,%3%p1
+ %^vf<macc_msac>.vf\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<macc_msac>.vf\t%0,%2,%3%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -6694,12 +6717,12 @@ (define_insn "*pred_mul_neg_<optab><mode>_undef"
(match_operand:V_VLSF 2 "vector_undef_operand")))]
"TARGET_VECTOR"
"@
- vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
- vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
- vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
- vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vv\t%0,%4,%5%p1"
+ %^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+ %^vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+ %^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1
+ %^vf<nmsac_nmacc>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vv\t%0,%4,%5%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6727,10 +6750,10 @@ (define_insn "*pred_<nmsub_nmadd><mode>"
(match_dup 2)))]
"TARGET_VECTOR"
"@
- vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
- vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
- vmv.v.v\t%0,%2\;vf<nmsub_nmadd>.vv\t%0,%3,%4%p1"
+ %^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+ %^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1
+ %^vmv.v.v\t%0,%2\;%^vf<nmsub_nmadd>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -6763,10 +6786,10 @@ (define_insn "*pred_<nmsac_nmacc><mode>"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
- vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vv\t%0,%2,%3%p1"
+ %^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+ %^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vv\t%0,%2,%3%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -6824,10 +6847,10 @@ (define_insn "*pred_<nmsub_nmadd><mode>_scalar"
(match_dup 3)))]
"TARGET_VECTOR"
"@
- vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
- vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
- vmv.v.v\t%0,%3\;vf<nmsub_nmadd>.vf\t%0,%2,%4%p1"
+ %^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+ %^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1
+ %^vmv.v.v\t%0,%3\;%^vf<nmsub_nmadd>.vf\t%0,%2,%4%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "4")
@@ -6861,10 +6884,10 @@ (define_insn "*pred_<nmsac_nmacc><mode>_scalar"
(match_dup 4)))]
"TARGET_VECTOR"
"@
- vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
- vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
- vmv.v.v\t%0,%4\;vf<nmsac_nmacc>.vf\t%0,%2,%3%p1"
+ %^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+ %^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1
+ %^vmv.v.v\t%0,%4\;%^vf<nmsac_nmacc>.vf\t%0,%2,%3%p1"
[(set_attr "type" "vfmuladd")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "2")
@@ -6903,7 +6926,7 @@ (define_insn "@pred_<optab><mode>"
(match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.v\t%0,%3%p1"
+ "%^vf<insn>.v\t%0,%3%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -6928,7 +6951,7 @@ (define_insn "@pred_<optab><mode>"
(match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<insn>.v\t%0,%3%p1"
+ "%^vf<insn>.v\t%0,%3%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")
(set_attr "vl_op_idx" "4")
@@ -6951,7 +6974,7 @@ (define_insn "@pred_<misc_op><mode>"
[(match_operand:VF 3 "register_operand" " vr, vr, vr, vr")] VFMISC)
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<misc_op>.v\t%0,%3%p1"
+ "%^vf<misc_op>.v\t%0,%3%p1"
[(set_attr "type" "<float_insn_type>")
(set_attr "mode" "<MODE>")])
@@ -6972,7 +6995,7 @@ (define_insn "@pred_<misc_frm_op><mode>"
[(match_operand:VF 3 "register_operand" " vr, vr, vr, vr")] VFMISC_FRM)
(match_operand:VF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vf<misc_frm_op>.v\t%0,%3%p1"
+ "%^vf<misc_frm_op>.v\t%0,%3%p1"
[(set_attr "type" "<float_frm_insn_type>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -6993,7 +7016,7 @@ (define_insn "@pred_class<mode>"
[(match_operand:VF 3 "register_operand" " vr, vr, vr, vr")] UNSPEC_VFCLASS)
(match_operand:<VCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfclass.v\t%0,%3%p1"
+ "%^vfclass.v\t%0,%3%p1"
[(set_attr "type" "vfclass")
(set_attr "mode" "<MODE>")])
@@ -7026,7 +7049,7 @@ (define_insn "@pred_dual_widen_<optab><mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")))
(match_operand:VWEXTF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfw<insn>.vv\t%0,%3,%4%p1"
+ "%^vfw<insn>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vf<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7053,7 +7076,7 @@ (define_insn "@pred_dual_widen_<optab><mode>_scalar"
(match_operand:<VSUBEL> 4 "register_operand" " f, f"))))
(match_operand:VWEXTF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfw<insn>.vf\t%0,%3,%4%p1"
+ "%^vfw<insn>.vf\t%0,%3,%4%p1"
[(set_attr "type" "vf<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7078,7 +7101,7 @@ (define_insn "@pred_single_widen_add<mode>"
(match_operand:VWEXTF 3 "register_operand" " vr, vr"))
(match_operand:VWEXTF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfwadd.wv\t%0,%3,%4%p1"
+ "%^vfwadd.wv\t%0,%3,%4%p1"
[(set_attr "type" "vfwalu")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7103,7 +7126,7 @@ (define_insn "@pred_single_widen_sub<mode>"
(match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" " vr, vr")))
(match_operand:VWEXTF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfwsub.wv\t%0,%3,%4%p1"
+ "%^vfwsub.wv\t%0,%3,%4%p1"
[(set_attr "type" "vfwalu")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7129,7 +7152,7 @@ (define_insn "@pred_single_widen_<plus_minus:optab><mode>_scalar"
(match_operand:<VSUBEL> 4 "register_operand" " f, f"))))
(match_operand:VWEXTF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfw<insn>.wf\t%0,%3,%4%p1"
+ "%^vfw<insn>.wf\t%0,%3,%4%p1"
[(set_attr "type" "vf<widen_binop_insn_type>")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7164,7 +7187,7 @@ (define_insn "@pred_widen_mul_<optab><mode>"
(match_operand:VWEXTF 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vfw<macc_msac>.vv\t%0,%3,%4%p1"
+ "%^vfw<macc_msac>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7193,7 +7216,7 @@ (define_insn "@pred_widen_mul_<optab><mode>_scalar"
(match_operand:VWEXTF 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vfw<macc_msac>.vf\t%0,%3,%4%p1"
+ "%^vfw<macc_msac>.vf\t%0,%3,%4%p1"
[(set_attr "type" "vfwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7222,7 +7245,7 @@ (define_insn "@pred_widen_mul_neg_<optab><mode>"
(match_operand:VWEXTF 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vfw<nmsac_nmacc>.vv\t%0,%3,%4%p1"
+ "%^vfw<nmsac_nmacc>.vv\t%0,%3,%4%p1"
[(set_attr "type" "vfwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7252,7 +7275,7 @@ (define_insn "@pred_widen_mul_neg_<optab><mode>_scalar"
(match_operand:VWEXTF 2 "register_operand" " 0"))
(match_dup 2)))]
"TARGET_VECTOR"
- "vfw<nmsac_nmacc>.vf\t%0,%3,%4%p1"
+ "%^vfw<nmsac_nmacc>.vf\t%0,%3,%4%p1"
[(set_attr "type" "vfwmuladd")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7298,7 +7321,7 @@ (define_insn "*pred_cmp<mode>"
(match_operand:V_VLSF 5 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vmf%B3.vv\t%0,%4,%5%p1"
+ "%^vmf%B3.vv\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7317,7 +7340,7 @@ (define_insn "*pred_cmp<mode>_narrow_merge_tie_mask"
(match_operand:V_VLSF 4 "register_operand" " vr")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vmf%B2.vv\t%0,%3,%4,v0.t"
+ "%^vmf%B2.vv\t%0,%3,%4,v0.t"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -7341,7 +7364,7 @@ (define_insn "*pred_cmp<mode>_narrow"
(match_operand:V_VLSF 5 "register_operand" " vr, vr, 0, 0, vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, vu, vu, 0, 0, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vmf%B3.vv\t%0,%4,%5%p1"
+ "%^vmf%B3.vv\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7379,7 +7402,7 @@ (define_insn "*pred_cmp<mode>_scalar_merge_tie_mask"
(match_operand:<VEL> 4 "register_operand" " f"))])
(match_dup 1)))]
"TARGET_VECTOR"
- "vmf%B2.vf\t%0,%3,%4,v0.t"
+ "%^vmf%B2.vf\t%0,%3,%4,v0.t"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -7404,7 +7427,7 @@ (define_insn "*pred_cmp<mode>_scalar"
(match_operand:<VEL> 5 "register_operand" " f, f"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vmf%B3.vf\t%0,%4,%5%p1"
+ "%^vmf%B3.vf\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7425,7 +7448,7 @@ (define_insn "*pred_cmp<mode>_scalar_narrow"
(match_operand:<VEL> 5 "register_operand" " f, f, f, f, f"))])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vmf%B3.vf\t%0,%4,%5%p1"
+ "%^vmf%B3.vf\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7463,7 +7486,7 @@ (define_insn "*pred_eqne<mode>_scalar_merge_tie_mask"
(match_operand:V_VLSF 3 "register_operand" " vr")])
(match_dup 1)))]
"TARGET_VECTOR"
- "vmf%B2.vf\t%0,%3,%4,v0.t"
+ "%^vmf%B2.vf\t%0,%3,%4,v0.t"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")
(set_attr "merge_op_idx" "1")
@@ -7488,7 +7511,7 @@ (define_insn "*pred_eqne<mode>_scalar"
(match_operand:V_VLSF 4 "register_operand" " vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_le_one (<MODE>mode)"
- "vmf%B3.vf\t%0,%4,%5%p1"
+ "%^vmf%B3.vf\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7509,7 +7532,7 @@ (define_insn "*pred_eqne<mode>_scalar_narrow"
(match_operand:V_VLSF 4 "register_operand" " vr, 0, 0, vr, vr")])
(match_operand:<VM> 2 "vector_merge_operand" " vu, vu, 0, vu, 0")))]
"TARGET_VECTOR && riscv_vector::cmp_lmul_gt_one (<MODE>mode)"
- "vmf%B3.vf\t%0,%4,%5%p1"
+ "%^vmf%B3.vf\t%0,%4,%5%p1"
[(set_attr "type" "vfcmp")
(set_attr "mode" "<MODE>")])
@@ -7536,7 +7559,7 @@ (define_insn "@pred_merge<mode>_scalar"
(match_operand:<VM> 4 "register_operand" " vm,vm"))
(match_operand:V_VLSF 1 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfmerge.vfm\t%0,%2,%3,%4"
+ "%^vfmerge.vfm\t%0,%2,%3,%4"
[(set_attr "type" "vfmerge")
(set_attr "mode" "<MODE>")])
@@ -7564,7 +7587,7 @@ (define_insn "@pred_fcvt_x<v_su>_f<mode>"
[(match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr")] VFCVTS)
(match_operand:<VCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfcvt.x<v_su>.f.v\t%0,%3%p1"
+ "%^vfcvt.x<v_su>.f.v\t%0,%3%p1"
[(set_attr "type" "vfcvtftoi")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -7584,7 +7607,7 @@ (define_insn "@pred_<fix_cvt><mode>"
(any_fix:<VCONVERT>
(match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:<VCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"vfcvt.rtz.x<u>.f.v\t%0,%3%p1"
[(set_attr "type" "vfcvtftoi")
(set_attr "mode" "<MODE>")])
@@ -7606,7 +7629,7 @@ (define_insn "@pred_<float_cvt><mode>"
(match_operand:<VCONVERT> 3 "register_operand" " vr, vr, vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfcvt.f.x<u>.v\t%0,%3%p1"
+ "%^vfcvt.f.x<u>.v\t%0,%3%p1"
[(set_attr "type" "vfcvtitof")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -7636,7 +7659,7 @@ (define_insn "@pred_widen_fcvt_x<v_su>_f<mode>"
[(match_operand:<VNCONVERT> 3 "register_operand" " vr, vr")] VFCVTS)
(match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfwcvt.x<v_su>.f.v\t%0,%3%p1"
+ "%^vfwcvt.x<v_su>.f.v\t%0,%3%p1"
[(set_attr "type" "vfwcvtftoi")
(set_attr "mode" "<VNCONVERT>")
(set (attr "frm_mode")
@@ -7656,7 +7679,7 @@ (define_insn "@pred_widen_<fix_cvt><mode>"
(any_fix:VWCONVERTI
(match_operand:<VNCONVERT> 3 "register_operand" " vr, vr"))
(match_operand:VWCONVERTI 2 "vector_merge_operand" " vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"vfwcvt.rtz.x<u>.f.v\t%0,%3%p1"
[(set_attr "type" "vfwcvtftoi")
(set_attr "mode" "<VNCONVERT>")])
@@ -7676,7 +7699,7 @@ (define_insn "@pred_widen_<float_cvt><mode>"
(match_operand:<VNCONVERT> 3 "register_operand" " vr, vr"))
(match_operand:V_VLSF 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfwcvt.f.x<u>.v\t%0,%3%p1"
+ "%^vfwcvt.f.x<u>.v\t%0,%3%p1"
[(set_attr "type" "vfwcvtitof")
(set_attr "mode" "<VNCONVERT>")])
@@ -7695,7 +7718,7 @@ (define_insn "@pred_extend<mode>"
(match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" " vr, vr"))
(match_operand:VWEXTF_ZVFHMIN 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vfwcvt.f.f.v\t%0,%3%p1"
+ "%^vfwcvt.f.f.v\t%0,%3%p1"
[(set_attr "type" "vfwcvtftof")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -7723,7 +7746,7 @@ (define_insn "@pred_narrow_fcvt_x<v_su>_f<mode>"
[(match_operand:V_VLSF 3 "register_operand" " 0, 0, 0, 0, vr, vr")] VFCVTS)
(match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfncvt.x<v_su>.f.w\t%0,%3%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vfncvt.x<v_su>.f.v\t%0,%3%p1" : "vfncvt.x<v_su>.f.w\t%0,%3%p1"; }
[(set_attr "type" "vfncvtftoi")
(set_attr "mode" "<VNCONVERT>")
(set (attr "frm_mode")
@@ -7743,7 +7766,7 @@ (define_insn "@pred_narrow_<fix_cvt><mode>"
(any_fix:<VNCONVERT>
(match_operand:V_VLSF 3 "register_operand" " 0, 0, 0, 0, vr, vr"))
(match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"vfncvt.rtz.x<u>.f.w\t%0,%3%p1"
[(set_attr "type" "vfncvtftoi")
(set_attr "mode" "<VNCONVERT>")])
@@ -7765,7 +7788,7 @@ (define_insn "@pred_narrow_<float_cvt><mode>"
(match_operand:VWCONVERTI 3 "register_operand" " 0, 0, 0, 0, vr, vr"))
(match_operand:<VNCONVERT> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfncvt.f.x<u>.w\t%0,%3%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vfncvt.f.x<u>.v\t%0,%3%p1" : "vfncvt.f.x<u>.w\t%0,%3%p1"; }
[(set_attr "type" "vfncvtitof")
(set_attr "mode" "<VNCONVERT>")
(set (attr "frm_mode")
@@ -7788,7 +7811,7 @@ (define_insn "@pred_trunc<mode>"
(match_operand:VWEXTF_ZVFHMIN 3 "register_operand" " 0, 0, 0, 0, vr, vr"))
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
"TARGET_VECTOR"
- "vfncvt.f.f.w\t%0,%3%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vfncvt.f.f.v\t%0,%3%p1" : "vfncvt.f.f.w\t%0,%3%p1"; }
[(set_attr "type" "vfncvtftof")
(set_attr "mode" "<V_DOUBLE_TRUNC>")
(set (attr "frm_mode")
@@ -7809,7 +7832,7 @@ (define_insn "@pred_rod_trunc<mode>"
[(float_truncate:<V_DOUBLE_TRUNC>
(match_operand:VWEXTF_ZVFHMIN 3 "register_operand" " 0, 0, 0, 0, vr, vr"))] UNSPEC_ROD)
(match_operand:<V_DOUBLE_TRUNC> 2 "vector_merge_operand" " vu, 0, vu, 0, vu, 0")))]
- "TARGET_VECTOR"
+ "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
"vfncvt.rod.f.f.w\t%0,%3%p1"
[(set_attr "type" "vfncvtftof")
(set_attr "mode" "<V_DOUBLE_TRUNC>")])
@@ -7841,7 +7864,7 @@ (define_insn "@pred_<reduc_op><mode>"
] ANY_REDUC)
(match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
"TARGET_VECTOR"
- "v<reduc_op>.vs\t%0,%3,%4%p1"
+ "%^v<reduc_op>.vs\t%0,%3,%4%p1"
[(set_attr "type" "vired")
(set_attr "mode" "<MODE>")])
@@ -7862,7 +7885,7 @@ (define_insn "@pred_<reduc_op><mode>"
] ANY_WREDUC)
(match_operand:<V_EXT_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
"TARGET_VECTOR"
- "v<reduc_op>.vs\t%0,%3,%4%p1"
+ "%^v<reduc_op>.vs\t%0,%3,%4%p1"
[(set_attr "type" "viwred")
(set_attr "mode" "<MODE>")])
@@ -7883,7 +7906,7 @@ (define_insn "@pred_<reduc_op><mode>"
] ANY_FREDUC)
(match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
"TARGET_VECTOR"
- "vf<reduc_op>.vs\t%0,%3,%4%p1"
+ "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
[(set_attr "type" "vfredu")
(set_attr "mode" "<MODE>")])
@@ -7906,7 +7929,7 @@ (define_insn "@pred_<reduc_op><mode>"
] ANY_FREDUC_SUM)
(match_operand:<V_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
"TARGET_VECTOR"
- "vf<reduc_op>.vs\t%0,%3,%4%p1"
+ "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
[(set_attr "type" "vfred<order>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -7931,7 +7954,7 @@ (define_insn "@pred_<reduc_op><mode>"
] ANY_FWREDUC_SUM)
(match_operand:<V_EXT_LMUL1> 2 "vector_merge_operand" " vu, 0")] UNSPEC_REDUC))]
"TARGET_VECTOR"
- "vf<reduc_op>.vs\t%0,%3,%4%p1"
+ "%^vf<reduc_op>.vs\t%0,%3,%4%p1"
[(set_attr "type" "vfwred<order>")
(set_attr "mode" "<MODE>")
(set (attr "frm_mode")
@@ -7973,7 +7996,7 @@ (define_insn_and_split "*pred_extract_first<mode>"
(parallel [(const_int 0)]))
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
"TARGET_VECTOR"
- "vmv.x.s\t%0,%1"
+ "%^vmv.x.s\t%0,%1"
"known_gt (GET_MODE_BITSIZE (<VEL>mode), GET_MODE_BITSIZE (Pmode))"
[(const_int 0)]
{
@@ -8007,7 +8030,7 @@ (define_insn "@pred_extract_first_trunc<mode>"
(parallel [(const_int 0)]))
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)))]
"TARGET_VECTOR"
- "vmv.x.s\t%0,%1"
+ "%^vmv.x.s\t%0,%1"
[(set_attr "type" "vimovvx")
(set_attr "mode" "<MODE>")])
@@ -8036,7 +8059,7 @@ (define_insn "*pred_extract_first<mode>"
(parallel [(const_int 0)]))
(reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))]
"TARGET_VECTOR"
- "vfmv.f.s\t%0,%1"
+ "%^vfmv.f.s\t%0,%1"
[(set_attr "type" "vfmovvf")
(set_attr "mode" "<MODE>")])
@@ -8056,7 +8079,7 @@ (define_insn "@pred_slide<ud><mode>"
(match_operand:V_VLS 3 "register_operand" " vr, vr, vr, vr")
(match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK, rK, rK")] VSLIDES))]
"TARGET_VECTOR"
- "vslide<ud>.v%o4\t%0,%3,%4%p1"
+ "%^vslide<ud>.v%o4\t%0,%3,%4%p1"
[(set_attr "type" "vslide<ud>")
(set_attr "mode" "<MODE>")])
@@ -8076,7 +8099,7 @@ (define_insn "@pred_slide<ud><mode>"
(match_operand:V_VLSI_QHS 3 "register_operand" " vr, vr, vr, vr")
(match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ")] VSLIDES1))]
"TARGET_VECTOR"
- "vslide<ud>.vx\t%0,%3,%z4%p1"
+ "%^vslide<ud>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vislide<ud>")
(set_attr "mode" "<MODE>")])
@@ -8117,7 +8140,7 @@ (define_insn "*pred_slide<ud><mode>"
(match_operand:V_VLSI_D 3 "register_operand" " vr, vr, vr, vr")
(match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ")] VSLIDES1))]
"TARGET_VECTOR"
- "vslide<ud>.vx\t%0,%3,%z4%p1"
+ "%^vslide<ud>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vislide<ud>")
(set_attr "mode" "<MODE>")])
@@ -8137,7 +8160,7 @@ (define_insn "*pred_slide<ud><mode>_extended"
(sign_extend:<VEL>
(match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ, rJ, rJ"))] VSLIDES1))]
"TARGET_VECTOR"
- "vslide<ud>.vx\t%0,%3,%z4%p1"
+ "%^vslide<ud>.vx\t%0,%3,%z4%p1"
[(set_attr "type" "vislide<ud>")
(set_attr "mode" "<MODE>")])
@@ -8157,7 +8180,7 @@ (define_insn "@pred_slide<ud><mode>"
(match_operand:V_VLSF 3 "register_operand" " vr, vr, vr, vr")
(match_operand:<VEL> 4 "register_operand" " f, f, f, f")] VFSLIDES1))]
"TARGET_VECTOR"
- "vfslide<ud>.vf\t%0,%3,%4%p1"
+ "%^vfslide<ud>.vf\t%0,%3,%4%p1"
[(set_attr "type" "vfslide<ud>")
(set_attr "mode" "<MODE>")])
@@ -8178,7 +8201,7 @@ (define_insn "@pred_gather<mode>"
(match_operand:<VINDEX> 4 "register_operand" " vr, vr")] UNSPEC_VRGATHER)
(match_operand:V_VLS 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vrgather.vv\t%0,%3,%4%p1"
+ "%^vrgather.vv\t%0,%3,%4%p1"
[(set_attr "type" "vgather")
(set_attr "mode" "<MODE>")])
@@ -8198,7 +8221,7 @@ (define_insn "@pred_gather<mode>_scalar"
(match_operand 4 "pmode_reg_or_uimm5_operand" " rK, rK")] UNSPEC_VRGATHER)
(match_operand:V_VLS 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vrgather.v%o4\t%0,%3,%4%p1"
+ "%^vrgather.v%o4\t%0,%3,%4%p1"
[(set_attr "type" "vgather")
(set_attr "mode" "<MODE>")])
@@ -8219,7 +8242,7 @@ (define_insn "@pred_gatherei16<mode>"
(match_operand:<VINDEXEI16> 4 "register_operand" " vr, vr")] UNSPEC_VRGATHEREI16)
(match_operand:VEI16 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vrgatherei16.vv\t%0,%3,%4%p1"
+ "%^vrgatherei16.vv\t%0,%3,%4%p1"
[(set_attr "type" "vgather")
(set_attr "mode" "<MODE>")])
@@ -8237,7 +8260,7 @@ (define_insn "@pred_compress<mode>"
(match_operand:V_VLS 2 "register_operand" " vr, vr")
(match_operand:V_VLS 1 "vector_merge_operand" " vu, 0")] UNSPEC_VCOMPRESS))]
"TARGET_VECTOR"
- "vcompress.vm\t%0,%2,%3"
+ "%^vcompress.vm\t%0,%2,%3"
[(set_attr "type" "vcompress")
(set_attr "mode" "<MODE>")])
@@ -8288,7 +8311,7 @@ (define_insn "@pred_fault_load<mode>"
(unspec:V [(match_dup 3)] UNSPEC_VLEFF)
(match_dup 2))] UNSPEC_MODIFY_VL))]
"TARGET_VECTOR"
- "vle<sew>ff.v\t%0,%3%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vleff.v\t%0,%3%p1" : "vle<sew>ff.v\t%0,%3%p1"; }
[(set_attr "type" "vldff")
(set_attr "mode" "<MODE>")])
@@ -8318,7 +8341,7 @@ (define_insn "@pred_unit_strided_load<mode>"
(mem:BLK (scratch))] UNSPEC_UNIT_STRIDED)
(match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
"TARGET_VECTOR"
- "vlseg<nf>e<sew>.v\t%0,(%z3)%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlseg<nf>e.v\t%0,(%z3)%p1" : "vlseg<nf>e<sew>.v\t%0,(%z3)%p1"; }
[(set_attr "type" "vlsegde")
(set_attr "mode" "<MODE>")])
@@ -8335,7 +8358,7 @@ (define_insn "@pred_unit_strided_store<mode>"
(match_operand:VT 2 "register_operand" " vr")
(mem:BLK (scratch))] UNSPEC_UNIT_STRIDED))]
"TARGET_VECTOR"
- "vsseg<nf>e<sew>.v\t%2,(%z1)%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsseg<nf>e.v\t%2,(%z1)%p0" : "vsseg<nf>e<sew>.v\t%2,(%z1)%p0"; }
[(set_attr "type" "vssegte")
(set_attr "mode" "<MODE>")])
@@ -8356,7 +8379,7 @@ (define_insn "@pred_strided_load<mode>"
(mem:BLK (scratch))] UNSPEC_STRIDED)
(match_operand:VT 2 "vector_merge_operand" " 0, vu, vu")))]
"TARGET_VECTOR"
- "vlsseg<nf>e<sew>.v\t%0,(%z3),%z4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlsseg<nf>e.v\t%0,(%z3),%z4%p1" : "vlsseg<nf>e<sew>.v\t%0,(%z3),%z4%p1"; }
[(set_attr "type" "vlsegds")
(set_attr "mode" "<MODE>")])
@@ -8374,7 +8397,7 @@ (define_insn "@pred_strided_store<mode>"
(match_operand:VT 3 "register_operand" " vr")
(mem:BLK (scratch))] UNSPEC_STRIDED))]
"TARGET_VECTOR"
- "vssseg<nf>e<sew>.v\t%3,(%z1),%z2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vssseg<nf>e.v\t%3,(%z1),%z2%p0" : "vssseg<nf>e<sew>.v\t%3,(%z1),%z2%p0"; }
[(set_attr "type" "vssegts")
(set_attr "mode" "<MODE>")])
@@ -8405,7 +8428,7 @@ (define_insn "@pred_fault_load<mode>"
[(match_dup 3) (mem:BLK (scratch))] UNSPEC_VLEFF)
(match_dup 2))] UNSPEC_MODIFY_VL))]
"TARGET_VECTOR"
- "vlseg<nf>e<sew>ff.v\t%0,(%z3)%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlseg<nf>eff.v\t%0,(%z3)%p1" : "vlseg<nf>e<sew>ff.v\t%0,(%z3)%p1"; }
[(set_attr "type" "vlsegdff")
(set_attr "mode" "<MODE>")])
@@ -8426,7 +8449,7 @@ (define_insn "@pred_indexed_<order>load<V1T:mode><RATIO64I:mode>"
(match_operand:RATIO64I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V1T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO64I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO64I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V1T:MODE>")])
@@ -8447,7 +8470,7 @@ (define_insn "@pred_indexed_<order>load<V2T:mode><RATIO32I:mode>"
(match_operand:RATIO32I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V2T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO32I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO32I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V2T:MODE>")])
@@ -8468,7 +8491,7 @@ (define_insn "@pred_indexed_<order>load<V4T:mode><RATIO16I:mode>"
(match_operand:RATIO16I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V4T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO16I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO16I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V4T:MODE>")])
@@ -8489,7 +8512,7 @@ (define_insn "@pred_indexed_<order>load<V8T:mode><RATIO8I:mode>"
(match_operand:RATIO8I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V8T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO8I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO8I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V8T:MODE>")])
@@ -8510,7 +8533,7 @@ (define_insn "@pred_indexed_<order>load<V16T:mode><RATIO4I:mode>"
(match_operand:RATIO4I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V16T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO4I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO4I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V16T:MODE>")])
@@ -8531,7 +8554,7 @@ (define_insn "@pred_indexed_<order>load<V32T:mode><RATIO2I:mode>"
(match_operand:RATIO2I 4 "register_operand" " vr, vr")] ORDER)
(match_operand:V32T 2 "vector_merge_operand" " vu, 0")))]
"TARGET_VECTOR"
- "vl<order>xseg<nf>ei<RATIO2I:sew>.v\t%0,(%z3),%4%p1"
+ { return TARGET_XTHEADVECTOR ? "th.vlxseg<nf>e.v\t%0,(%z3),%4%p1" : "vl<order>xseg<nf>ei<RATIO2I:sew>.v\t%0,(%z3),%4%p1"; }
[(set_attr "type" "vlsegd<order>x")
(set_attr "mode" "<V32T:MODE>")])
@@ -8548,7 +8571,7 @@ (define_insn "@pred_indexed_<order>store<V1T:mode><RATIO64I:mode>"
(match_operand:RATIO64I 2 "register_operand" " vr")
(match_operand:V1T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO64I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V1T:MODE>")])
@@ -8565,7 +8588,7 @@ (define_insn "@pred_indexed_<order>store<V2T:mode><RATIO32I:mode>"
(match_operand:RATIO32I 2 "register_operand" " vr")
(match_operand:V2T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO32I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V2T:MODE>")])
@@ -8582,7 +8605,7 @@ (define_insn "@pred_indexed_<order>store<V4T:mode><RATIO16I:mode>"
(match_operand:RATIO16I 2 "register_operand" " vr")
(match_operand:V4T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO16I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V4T:MODE>")])
@@ -8599,7 +8622,7 @@ (define_insn "@pred_indexed_<order>store<V8T:mode><RATIO8I:mode>"
(match_operand:RATIO8I 2 "register_operand" " vr")
(match_operand:V8T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO8I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V8T:MODE>")])
@@ -8616,7 +8639,7 @@ (define_insn "@pred_indexed_<order>store<V16T:mode><RATIO4I:mode>"
(match_operand:RATIO4I 2 "register_operand" " vr")
(match_operand:V16T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO4I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V16T:MODE>")])
@@ -8633,7 +8656,7 @@ (define_insn "@pred_indexed_<order>store<V32T:mode><RATIO2I:mode>"
(match_operand:RATIO2I 2 "register_operand" " vr")
(match_operand:V32T 3 "register_operand" " vr")] ORDER))]
"TARGET_VECTOR"
- "vs<order>xseg<nf>ei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"
+ { return TARGET_XTHEADVECTOR ? "th.vsxseg<nf>e.v\t%3,(%z1),%2%p0" : "vs<order>xseg<nf>ei<RATIO2I:sew>.v\t%3,(%z1),%2%p0"; }
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V32T:MODE>")])
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
index 3d81b179235..ef329e30785 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c
@@ -1,4 +1,4 @@
/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
-#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */
+#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
--
2.17.1
* [PATCH 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1)
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
2023-11-17 8:52 ` [PATCH 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
2023-11-17 8:55 ` [PATCH 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
@ 2023-11-17 8:56 ` Jun Sha (Joshua)
2023-11-17 8:58 ` [PATCH 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
` (5 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 8:56 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because the changes to instruction generation are substantial, we can only
duplicate some typical tests from testsuite/gcc.target/riscv/rvv/base rather
than cover every case. This patch adds tests for binary operations.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c: New test.
* gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp: New test.
---
.../rvv/xtheadvector/binop_vv_constraint-1.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vv_constraint-3.c | 27 +++++++
.../rvv/xtheadvector/binop_vv_constraint-4.c | 27 +++++++
.../rvv/xtheadvector/binop_vv_constraint-5.c | 29 ++++++++
.../rvv/xtheadvector/binop_vv_constraint-6.c | 28 +++++++
.../rvv/xtheadvector/binop_vv_constraint-7.c | 29 ++++++++
.../rvv/xtheadvector/binop_vx_constraint-1.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-10.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-2.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-3.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-4.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-5.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-6.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-7.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-8.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-9.c | 68 +++++++++++++++++
.../rvv/xtheadvector/rvv-xtheadvector.exp | 41 +++++++++++
17 files changed, 939 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
new file mode 100644
index 00000000000..172dfb6c228
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
new file mode 100644
index 00000000000..c89635ab85b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-3.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_m (m3, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_m (m3, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
new file mode 100644
index 00000000000..3cca8a47ef1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-4.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (m3, m3, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (m3, m3, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
new file mode 100644
index 00000000000..45a679b424c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-5.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (mask, m3, v, v, 4);
+ m4 = __riscv_vmseq_vv_i32m1_b32_m (m4, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (mask, m3, v, v, 4);
+ m4 = __riscv_vmslt_vv_i32m1_b32_m (m4, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vmv} 2 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
new file mode 100644
index 00000000000..1ef85d556d9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-6.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = __riscv_vlm_v_b32 (in, 4);
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v2, 4);
+ vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_mu (m3, mask, v, v, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = __riscv_vlm_v_b32 (in, 4);
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v2, 4);
+ vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_mu (m3, mask, v, v, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vmv} 2 } } */
+
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
new file mode 100644
index 00000000000..bbef0d43664
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vv_constraint-7.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmseq_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmseq_vv_i32m1_b32_m (m3, v2, v2, 4);
+ m4 = __riscv_vmseq_vv_i32m1_b32_m (m4, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vbool32_t m3 = __riscv_vmslt_vv_i32m1_b32 (v, v, 4);
+ vbool32_t m4 = __riscv_vmslt_vv_i32m1_b32_m (m3, v2, v2, 4);
+ m4 = __riscv_vmslt_vv_i32m1_b32_m (m4, v2, v2, 4);
+ __riscv_vsm_v_b32 (out, m4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
new file mode 100644
index 00000000000..ed9b0c7c01f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
new file mode 100644
index 00000000000..89616f3d3b0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-10.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
new file mode 100644
index 00000000000..e64543b1aac
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-2.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
new file mode 100644
index 00000000000..4775a4af325
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-3.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
new file mode 100644
index 00000000000..6dd00c8b3b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-4.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
new file mode 100644
index 00000000000..86606537b14
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-5.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
new file mode 100644
index 00000000000..e7bede15b86
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-6.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
new file mode 100644
index 00000000000..1cd688919f1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-7.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vxor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vxor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
new file mode 100644
index 00000000000..70f525d30ed
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-8.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vxor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vxor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vxor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vxor_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
new file mode 100644
index 00000000000..0b248b68e0c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-9.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
new file mode 100644
index 00000000000..ffc8fee575f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/rvv-xtheadvector.exp
@@ -0,0 +1,41 @@
+# Copyright (C) 2017-2020 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3. If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+ return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+ set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
+ "-I$srcdir/$subdir/../ -std=gnu99 -O2" $DEFAULT_CFLAGS
+
+# All done.
+dg-finish
\ No newline at end of file
--
2.17.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2)
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (2 preceding siblings ...)
2023-11-17 8:56 ` [PATCH 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
@ 2023-11-17 8:58 ` Jun Sha (Joshua)
2023-11-17 8:59 ` [PATCH 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 8:58 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because XTheadVector changes instruction generation significantly,
we cannot share the existing RVV tests directly; instead we duplicate
some typical tests from testsuite/gcc.target/riscv/rvv/base.
This patch adds tests for binary operations.
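As an illustration (not itself part of the diff below), every duplicated test follows the same shape: load vectors with a given tail/mask policy, apply a binary operation in both its plain and policy-suffixed intrinsic forms, store the result, and match the expected `th.`-prefixed assembly via check-function-bodies. A minimal sketch of that pattern, with the expected mnemonics noted in comments:

```c
/* Illustrative sketch only -- mirrors the structure of the tests in
   this patch.  The th.-prefixed mnemonics are matched instead of the
   standard RVV ones (vle32.v, vand.vx, ...), since XTheadVector emits
   its own instruction names.  */
#include "riscv_th_vector.h"

void example (void *in, void *out, int32_t x)
{
  vint32m1_t v  = __riscv_vle32_v_i32m1 (in, 4);     /* expect th.vle.v  */
  vint32m1_t v2 = __riscv_vand_vx_i32m1 (v, x, 4);   /* expect th.vand.vx */
  __riscv_vse32_v_i32m1 (out, v2, 4);                /* expect th.vse.v  */
}
```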
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c: New test.
---
.../rvv/xtheadvector/binop_vx_constraint-11.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-12.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-13.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-14.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-15.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-16.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-17.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-18.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-19.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-20.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-21.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-22.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-23.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-24.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-25.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-26.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-27.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-28.c | 68 +++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-29.c | 73 +++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-30.c | 68 +++++++++++++++++
20 files changed, 1405 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
new file mode 100644
index 00000000000..f9671318a67
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-11.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vand\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vand\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
new file mode 100644
index 00000000000..3e991339a22
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-12.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vand\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vand\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vand_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vand_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
new file mode 100644
index 00000000000..068e9c32511
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-13.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
new file mode 100644
index 00000000000..26af4748453
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-14.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
new file mode 100644
index 00000000000..f19130108df
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-15.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vor\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vor\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
new file mode 100644
index 00000000000..3134d1ebe5c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-16.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vor\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vor\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vor_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vor_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
new file mode 100644
index 00000000000..82e7c668e59
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-17.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
new file mode 100644
index 00000000000..57c548b25c5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-18.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmul\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmul\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmul_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmul_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
new file mode 100644
index 00000000000..8406970e64e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-19.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
new file mode 100644
index 00000000000..6b34dfa9c79
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-20.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmax\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmax\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmax_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmax_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
new file mode 100644
index 00000000000..e73bc0f68bc
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-21.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
new file mode 100644
index 00000000000..04f2d292bb4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-22.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmin\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmin\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vmin_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vmin_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
new file mode 100644
index 00000000000..6ce0d028347
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-23.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
new file mode 100644
index 00000000000..0536eba14b8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-24.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vmaxu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vmaxu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vmaxu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vmaxu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
new file mode 100644
index 00000000000..291b0afdf85
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-25.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
new file mode 100644
index 00000000000..9c85da5b605
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-26.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vminu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vminu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vminu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vminu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
new file mode 100644
index 00000000000..bea468b263a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-27.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, 5, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
new file mode 100644
index 00000000000..2640324cb4d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-28.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vdiv\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdiv\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vdiv_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vdiv_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
new file mode 100644
index 00000000000..66361ad567d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-29.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
new file mode 100644
index 00000000000..901e03bc181
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-30.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
--
2.17.1
* [PATCH 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3)
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (3 preceding siblings ...)
2023-11-17 8:58 ` [PATCH 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
@ 2023-11-17 8:59 ` Jun Sha (Joshua)
2023-11-17 9:00 ` [PATCH 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 8:59 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because the XTheadVector changes to instruction generation are substantial,
we can only duplicate a representative subset of the tests from
testsuite/gcc.target/riscv/rvv/base rather than the full suite.
This patch adds tests for vector-scalar binary operations.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c: New test.
* gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c: New test.
---
.../rvv/xtheadvector/binop_vx_constraint-31.c | 73 +++++++++++
.../rvv/xtheadvector/binop_vx_constraint-32.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-33.c | 73 +++++++++++
.../rvv/xtheadvector/binop_vx_constraint-34.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-35.c | 73 +++++++++++
.../rvv/xtheadvector/binop_vx_constraint-36.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-37.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-38.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-39.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-40.c | 73 +++++++++++
.../rvv/xtheadvector/binop_vx_constraint-41.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-42.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-43.c | 68 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-44.c | 73 +++++++++++
.../rvv/xtheadvector/binop_vx_constraint-45.c | 123 ++++++++++++++++++
.../rvv/xtheadvector/binop_vx_constraint-46.c | 72 ++++++++++
.../rvv/xtheadvector/binop_vx_constraint-47.c | 16 +++
.../rvv/xtheadvector/binop_vx_constraint-48.c | 16 +++
.../rvv/xtheadvector/binop_vx_constraint-49.c | 16 +++
.../rvv/xtheadvector/binop_vx_constraint-50.c | 18 +++
20 files changed, 1238 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
new file mode 100644
index 00000000000..66361ad567d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-31.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** ...
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
new file mode 100644
index 00000000000..901e03bc181
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-32.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vdivu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vdivu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vdivu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vdivu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
new file mode 100644
index 00000000000..651244f7a0d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-33.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
new file mode 100644
index 00000000000..25460cd3f17
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-34.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
new file mode 100644
index 00000000000..651244f7a0d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-35.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, 5, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, 5, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
new file mode 100644
index 00000000000..25460cd3f17
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-36.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vremu\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vremu\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_vle32_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_vle32_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vremu_vx_u32m1 (v2, x, 4);
+ vuint32m1_t v4 = __riscv_vremu_vx_u32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_u32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
new file mode 100644
index 00000000000..aca803f3930
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-37.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
new file mode 100644
index 00000000000..ce9261f67e3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-38.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, -15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, -15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, -15, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, -15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
new file mode 100644
index 00000000000..3adb7ae8f79
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-39.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vsub_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
new file mode 100644
index 00000000000..995b52130cb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-40.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, 17, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, 17, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, 17, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, 17, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
new file mode 100644
index 00000000000..7c4b1e78ca3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-41.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
new file mode 100644
index 00000000000..b971a9af222
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-42.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
new file mode 100644
index 00000000000..ae23fa67f02
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-43.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** th.vrsub\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vrsub\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*15,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 15, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, 15, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
new file mode 100644
index 00000000000..120230d1f2c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-44.c
@@ -0,0 +1,73 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tu (v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_m (mask, v3, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0\.t
+** ...
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0\.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vrsub_vx_i32m1 (v2, 16, 4);
+ vint32m1_t v4 = __riscv_vrsub_vx_i32m1_tumu (mask, v3, v2, 16, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
new file mode 100644
index 00000000000..cec8a0b8012
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-45.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcxtheadvector -mabi=lp64d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, -16, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, -16, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 15, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 15, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 16, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 16, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, x, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
new file mode 100644
index 00000000000..7210890f20f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-46.c
@@ -0,0 +1,72 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, -16, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, -16, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*15
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 15, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 15, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 16, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 16, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1 (v3, 0xAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
new file mode 100644
index 00000000000..0351e452d5f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-47.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, 0xAAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
new file mode 100644
index 00000000000..3b849e906db
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-48.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, 0xAAAAAAAAAAAAAAAA, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
new file mode 100644
index 00000000000..4a18a410252
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-49.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, x, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
new file mode 100644
index 00000000000..6713316fcab
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/binop_vx_constraint-50.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+#include "riscv_th_vector.h"
+
+void f (void * in, void *out, int32_t x, int n)
+{
+ for (int i = 0; i < n; i++) {
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + i + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + i + 2, 4);
+ vint64m1_t v3 = __riscv_vadd_vx_i64m1 (v2, x, 4);
+ vint64m1_t v4 = __riscv_vadd_vx_i64m1_tu (v3, v2, x, 4);
+ __riscv_vse64_v_i64m1 (out + i + 2, v4, 4);
+ }
+}
+
+/* { dg-final { scan-assembler-times {th.vlse\.v\s+v[0-9]+,\s*0\([a-x0-9]+\),\s*zero\s+\.L[0-9]+\:\s+} 1 } } */
+/* { dg-final { scan-assembler-times {th.vadd\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 2 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
--
2.17.1
* [PATCH 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4)
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (4 preceding siblings ...)
2023-11-17 8:59 ` [PATCH 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
@ 2023-11-17 9:00 ` Jun Sha (Joshua)
2023-11-17 9:01 ` [PATCH 7/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5) Jun Sha (Joshua)
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 9:00 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because instruction generation changes significantly, we can only duplicate
some typical tests from testsuite/gcc.target/riscv/rvv/base.
This patch adds tests for ternary and unary operations.
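For reference, the ternary multiply-accumulate semantics these tests
exercise (e.g. th.vmacc.vv) can be modeled element-wise in plain C as
vd[i] += vs1[i] * vs2[i] for i < vl. This scalar model is only an
illustrative sketch, not part of the patch or the intrinsics API:

```c
#include <stdint.h>

/* Scalar model of the vmacc.vv semantics checked by the ternop tests:
   accumulate vs1 * vs2 into vd for the first vl elements.  */
static void
model_vmacc (int32_t *vd, const int32_t *vs1, const int32_t *vs2, int vl)
{
  for (int i = 0; i < vl; i++)
    vd[i] += vs1[i] * vs2[i];
}
```

The tests then verify that chains of such operations are emitted as
th.vmacc/th.vmadd without extra th.vmv register moves.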
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c: New test.
* gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c: New test.
* gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c: New test.
---
.../rvv/xtheadvector/ternop_vv_constraint-1.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vv_constraint-2.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vv_constraint-3.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vv_constraint-4.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vv_constraint-5.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vv_constraint-6.c | 83 +++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-1.c | 71 ++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-2.c | 38 +++++
.../rvv/xtheadvector/ternop_vx_constraint-3.c | 125 +++++++++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-4.c | 123 +++++++++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-5.c | 123 +++++++++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-6.c | 130 ++++++++++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-7.c | 130 ++++++++++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-8.c | 71 ++++++++++
.../rvv/xtheadvector/ternop_vx_constraint-9.c | 71 ++++++++++
.../rvv/xtheadvector/unop_v_constraint-1.c | 68 +++++++++
16 files changed, 1448 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
new file mode 100644
index 00000000000..d98755e7040
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-1.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vv_i32m1 (v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vv_i32m1(v3, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vv_i32m1_tu (v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vv_i32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vv_i32m1_m (m, v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vv_i32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vmacc_vv_i32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
new file mode 100644
index 00000000000..e9d2c7f10a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-2.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmadd_vv_i32m1 (v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmadd_vv_i32m1(v3, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmadd_vv_i32m1_tu (v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmadd_vv_i32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmadd_vv_i32m1_m (m, v, v2, v2, 4);
+ vint32m1_t v4 = __riscv_vmadd_vv_i32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vmadd_vv_i32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
new file mode 100644
index 00000000000..2f70761558d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-3.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
new file mode 100644
index 00000000000..0ba9c866b32
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-4.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmadd_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmadd_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
new file mode 100644
index 00000000000..e913cfe9ef8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-5.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmacc_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
new file mode 100644
index 00000000000..ced00a2e43e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vv_constraint-6.c
@@ -0,0 +1,83 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1 (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1(v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1 (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_tu (v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_tu(v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_tu (v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vfnma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void * in3, void *out)
+{
+ vbool32_t m = __riscv_vlm_v_b32 (in3, 4);
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmadd_vv_f32m1_m (m, v, v2, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmadd_vv_f32m1_m(m, v3, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ v4 = __riscv_vfnmadd_vv_f32m1_m (m, v4, v2, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
new file mode 100644
index 00000000000..34e6fe355a3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-1.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
new file mode 100644
index 00000000000..290981625bf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-2.c
@@ -0,0 +1,38 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+#include "riscv_th_vector.h"
+
+void f1 (void * in, void * in2, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1 (in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f2 (void * in, void * in2, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+void f3 (void * in, void * in2, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in2, 4);
+ vint32m1_t v3 = __riscv_vmacc_vx_i32m1 (v, x, v2, 4);
+ vint32m1_t v4 = __riscv_vmacc_vx_i32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-times {th.vma[c-d][c-d]\.vx\s+v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+\s+} 5 } } */
+/* { dg-final { scan-assembler-times {th.vma[c-d][c-d]\.vx\s+v[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,\s*v0.t} 1 } } */
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
new file mode 100644
index 00000000000..491cd2d42af
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-3.c
@@ -0,0 +1,125 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64gcxtheadvector -mabi=lp64d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, -16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, -16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 15, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 15, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, x, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, x, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
new file mode 100644
index 00000000000..70f249bfc8b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-4.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, -16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, -16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 15, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 15, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1 (v2, x, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1 (v3, x, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
new file mode 100644
index 00000000000..3de929de136
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-5.c
@@ -0,0 +1,123 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, -16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, -16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 15, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 15, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tu (v2, x, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tu (v3, x, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
new file mode 100644
index 00000000000..ceef8794297
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-6.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask, v2, -16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask, v3, -16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 15, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 15, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_m (mask,v2, x, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_m (mask,v3, x, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
new file mode 100644
index 00000000000..6e524489176
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-7.c
@@ -0,0 +1,130 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32 -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f0:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f0 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask, v2, -16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask, v3, -16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f1:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f1 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 15, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 15, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f2 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 16, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 16, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f3 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f4:
+** ...
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vx\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** ...
+** ret
+*/
+void f4 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f5:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** ...
+*/
+void f5 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, 0xAAAAAAAAAAAAAAAA, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, 0xAAAAAAAAAAAAAAAA, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/*
+** f6:
+** ...
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** th.vma[c-d][c-d]\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+,v0.t
+** ...
+*/
+void f6 (void * in, void *out, int64_t x, int n)
+{
+ vbool64_t mask = __riscv_vlm_v_b64 (in + 100, 4);
+ vint64m1_t v = __riscv_vle64_v_i64m1 (in + 1, 4);
+ vint64m1_t v2 = __riscv_vle64_v_i64m1_tu (v, in + 2, 4);
+ vint64m1_t v3 = __riscv_vmacc_vx_i64m1_tumu (mask,v2, x, v2, 4);
+ vint64m1_t v4 = __riscv_vmacc_vx_i64m1_tumu (mask,v3, x, v3, 4);
+ __riscv_vse64_v_i64m1 (out + 2, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
new file mode 100644
index 00000000000..16f03203276
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-8.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
new file mode 100644
index 00000000000..13bd7f762f2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/ternop_vx_constraint-9.c
@@ -0,0 +1,71 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_th_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void * in2, void *out, float x)
+{
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1 (in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tu (v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vle.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+
+** th.vfnma[c-d][c-d]\.vf\tv[0-9]+,\s*[a-x0-9]+,\s*v[0-9]+,v0.t
+** th.vse.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void * in2, void *out, float x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vfloat32m1_t v = __riscv_vle32_v_f32m1 (in, 4);
+ vfloat32m1_t v2 = __riscv_vle32_v_f32m1_m (mask, in2, 4);
+ vfloat32m1_t v3 = __riscv_vfnmacc_vf_f32m1 (v, x, v2, 4);
+ vfloat32m1_t v4 = __riscv_vfnmacc_vf_f32m1_tumu (mask, v3, x, v2, 4);
+ __riscv_vse32_v_f32m1 (out, v4, 4);
+}
+
+/* { dg-final { scan-assembler-not {th.vmv} } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
new file mode 100644
index 00000000000..95b35d3ad36
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/unop_v_constraint-1.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out)
+{
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+ vint32m1_t v4 = __riscv_vneg_v_i32m1_tu (v3, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+ vint32m1_t v4 = __riscv_vneg_v_i32m1_m (mask, v3, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vle\.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vrsub\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vse\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_vle32_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_vle32_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vneg_v_i32m1 (v2, 4);
+ vint32m1_t v4 = __riscv_vneg_v_i32m1_tumu (mask, v3, v2, 4);
+ __riscv_vse32_v_i32m1 (out, v4, 4);
+}
--
2.17.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 7/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5)
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (5 preceding siblings ...)
2023-11-17 9:00 ` [PATCH 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
@ 2023-11-17 9:01 ` Jun Sha (Joshua)
2023-11-17 9:02 ` [PATCH 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
2023-11-17 9:03 ` [PATCH 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 9:01 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because the changes to instruction generation are substantial, we can
only duplicate some typical tests from testsuite/gcc.target/riscv/rvv/base.
This patch adds some tests for auto-vectorization.
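The loops these autovec tests exercise can be illustrated with a plain-C sketch (mirroring the vadd template added below; the expected lowering into th.vle.v / vadd.vv / th.vse.v sequences is taken from the scan-assembler patterns in this patch):

```c
#include <stdint.h>
#include <assert.h>

/* With -march=rv32gc_xtheadvector and
   --param=riscv-autovec-preference=fixed-vlmax, the vectorizer is
   expected to turn this scalar loop into vector loads (th.vle.v),
   a vector add (vadd.vv) and vector stores (th.vse.v).  */
__attribute__ ((noipa))
void
vadd_int32 (int32_t *dst, int32_t *a, int32_t *b, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = a[i] + b[i];
}
```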
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c: New test.
* gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h: New test.
---
.../rvv/xtheadvector/autovec/vadd-run-nofm.c | 4 +
.../riscv/rvv/xtheadvector/autovec/vadd-run.c | 81 +++++++++++++++++++
.../xtheadvector/autovec/vadd-rv32gcv-nofm.c | 10 +++
.../rvv/xtheadvector/autovec/vadd-rv32gcv.c | 8 ++
.../xtheadvector/autovec/vadd-rv64gcv-nofm.c | 10 +++
.../rvv/xtheadvector/autovec/vadd-rv64gcv.c | 8 ++
.../rvv/xtheadvector/autovec/vadd-template.h | 70 ++++++++++++++++
.../rvv/xtheadvector/autovec/vadd-zvfh-run.c | 54 +++++++++++++
.../riscv/rvv/xtheadvector/autovec/vand-run.c | 75 +++++++++++++++++
.../rvv/xtheadvector/autovec/vand-rv32gcv.c | 7 ++
.../rvv/xtheadvector/autovec/vand-rv64gcv.c | 7 ++
.../rvv/xtheadvector/autovec/vand-template.h | 61 ++++++++++++++
12 files changed, 395 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c
new file mode 100644
index 00000000000..b6328d0ad65
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run-nofm.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model --param=riscv-autovec-preference=scalable" } */
+
+#include "vadd-run.c"
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c
new file mode 100644
index 00000000000..ba453d18c66
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-run.c
@@ -0,0 +1,81 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model --param=riscv-autovec-preference=fixed-vlmax -ffast-math" } */
+
+#include "vadd-template.h"
+
+#include <assert.h>
+
+#define SZ 512
+
+#define RUN(TYPE,VAL) \
+ TYPE a##TYPE[SZ]; \
+ TYPE b##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ { \
+ a##TYPE[i] = 0; \
+ b##TYPE[i] = VAL; \
+ } \
+ vadd_##TYPE (a##TYPE, a##TYPE, b##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (a##TYPE[i] == VAL);
+
+#define RUN2(TYPE,VAL) \
+ TYPE as##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ as##TYPE[i] = 0; \
+ vadds_##TYPE (as##TYPE, as##TYPE, VAL, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (as##TYPE[i] == VAL);
+
+#define RUN3(TYPE,VAL) \
+ TYPE ai##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ ai##TYPE[i] = VAL; \
+ vaddi_##TYPE (ai##TYPE, ai##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (ai##TYPE[i] == VAL + 15);
+
+#define RUN3M(TYPE,VAL) \
+ TYPE aim##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ aim##TYPE[i] = VAL; \
+ vaddim_##TYPE (aim##TYPE, aim##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (aim##TYPE[i] == VAL - 16);
+
+#define RUN_ALL() \
+ RUN(int8_t, -1) \
+ RUN(uint8_t, 2) \
+ RUN(int16_t, -1) \
+ RUN(uint16_t, 2) \
+ RUN(int32_t, -3) \
+ RUN(uint32_t, 4) \
+ RUN(int64_t, -5) \
+ RUN(uint64_t, 6) \
+ RUN(float, -5) \
+ RUN(double, 6) \
+ RUN2(int8_t, -7) \
+ RUN2(uint8_t, 8) \
+ RUN2(int16_t, -7) \
+ RUN2(uint16_t, 8) \
+ RUN2(int32_t, -9) \
+ RUN2(uint32_t, 10) \
+ RUN2(int64_t, -11) \
+ RUN2(uint64_t, 12) \
+ RUN2(float, -11) \
+ RUN2(double, 12) \
+ RUN3M(int8_t, 13) \
+ RUN3(uint8_t, 14) \
+ RUN3M(int16_t, 13) \
+ RUN3(uint16_t, 14) \
+ RUN3M(int32_t, 15) \
+ RUN3(uint32_t, 16) \
+ RUN3M(int64_t, 17) \
+ RUN3(uint64_t, 18) \
+ RUN3(float, 17) \
+ RUN3M(double, 18) \
+
+int main ()
+{
+ RUN_ALL()
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c
new file mode 100644
index 00000000000..eef83196be5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv-nofm.c
@@ -0,0 +1,10 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv32gc_zvfh_xtheadvector -mabi=ilp32d --param=riscv-autovec-preference=scalable -fdump-tree-optimized-details" } */
+
+#include "vadd-template.h"
+
+/* { dg-final { scan-assembler-times {\tvadd\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvadd\.vi} 8 } } */
+/* { dg-final { scan-assembler-times {\tvfadd\.vv} 9 } } */
+
+/* { dg-final { scan-tree-dump-times "\.COND_LEN_ADD" 9 "optimized" } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c
new file mode 100644
index 00000000000..7c9e857cc46
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv32gcv.c
@@ -0,0 +1,8 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv32gc_zvfh_xtheadvector -mabi=ilp32d --param=riscv-autovec-preference=fixed-vlmax -ffast-math" } */
+
+#include "vadd-template.h"
+
+/* { dg-final { scan-assembler-times {\tvadd\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvadd\.vi} 8 } } */
+/* { dg-final { scan-assembler-times {\tvfadd\.vv} 9 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c
new file mode 100644
index 00000000000..6a51d657013
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv-nofm.c
@@ -0,0 +1,10 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv64gc_zvfh_xtheadvector -mabi=lp64d --param=riscv-autovec-preference=scalable -fdump-tree-optimized-details" } */
+
+#include "vadd-template.h"
+
+/* { dg-final { scan-assembler-times {\tvadd\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvadd\.vi} 8 } } */
+/* { dg-final { scan-assembler-times {\tvfadd\.vv} 9 } } */
+
+/* { dg-final { scan-tree-dump-times "\.COND_LEN_ADD" 9 "optimized" } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c
new file mode 100644
index 00000000000..62250731536
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-rv64gcv.c
@@ -0,0 +1,8 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv64gc_zvfh_xtheadvector -mabi=lp64d --param=riscv-autovec-preference=fixed-vlmax -ffast-math" } */
+
+#include "vadd-template.h"
+
+/* { dg-final { scan-assembler-times {\tvadd\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvadd\.vi} 8 } } */
+/* { dg-final { scan-assembler-times {\tvfadd\.vv} 9 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h
new file mode 100644
index 00000000000..e05b9c76275
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-template.h
@@ -0,0 +1,70 @@
+#include <stdint-gcc.h>
+
+#define TEST_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vadd_##TYPE (TYPE *dst, TYPE *a, TYPE *b, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] + b[i]; \
+ }
+
+#define TEST2_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vadds_##TYPE (TYPE *dst, TYPE *a, TYPE b, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] + b; \
+ }
+
+#define TEST3_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vaddi_##TYPE (TYPE *dst, TYPE *a, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] + 15; \
+ }
+
+#define TEST3M_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vaddim_##TYPE (TYPE *dst, TYPE *a, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] - 16; \
+ }
+
+#define TEST_ALL() \
+ TEST_TYPE(int8_t) \
+ TEST_TYPE(uint8_t) \
+ TEST_TYPE(int16_t) \
+ TEST_TYPE(uint16_t) \
+ TEST_TYPE(int32_t) \
+ TEST_TYPE(uint32_t) \
+ TEST_TYPE(int64_t) \
+ TEST_TYPE(uint64_t) \
+ TEST_TYPE(_Float16) \
+ TEST_TYPE(float) \
+ TEST_TYPE(double) \
+ TEST2_TYPE(int8_t) \
+ TEST2_TYPE(uint8_t) \
+ TEST2_TYPE(int16_t) \
+ TEST2_TYPE(uint16_t) \
+ TEST2_TYPE(int32_t) \
+ TEST2_TYPE(uint32_t) \
+ TEST2_TYPE(int64_t) \
+ TEST2_TYPE(uint64_t) \
+ TEST2_TYPE(_Float16) \
+ TEST2_TYPE(float) \
+ TEST2_TYPE(double) \
+ TEST3M_TYPE(int8_t) \
+ TEST3_TYPE(uint8_t) \
+ TEST3M_TYPE(int16_t) \
+ TEST3_TYPE(uint16_t) \
+ TEST3M_TYPE(int32_t) \
+ TEST3_TYPE(uint32_t) \
+ TEST3M_TYPE(int64_t) \
+ TEST3_TYPE(uint64_t) \
+ TEST3M_TYPE(_Float16) \
+ TEST3_TYPE(float) \
+ TEST3M_TYPE(double) \
+
+TEST_ALL()
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c
new file mode 100644
index 00000000000..2a8618ad09b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vadd-zvfh-run.c
@@ -0,0 +1,54 @@
+/* { dg-do run { target { riscv_v && riscv_zvfh_hw } } } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model --param=riscv-autovec-preference=fixed-vlmax -ffast-math" } */
+
+#include "vadd-template.h"
+
+#include <assert.h>
+
+#define SZ 512
+
+#define RUN(TYPE,VAL) \
+ TYPE a##TYPE[SZ]; \
+ TYPE b##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ { \
+ a##TYPE[i] = 0; \
+ b##TYPE[i] = VAL; \
+ } \
+ vadd_##TYPE (a##TYPE, a##TYPE, b##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (a##TYPE[i] == VAL);
+
+#define RUN2(TYPE,VAL) \
+ TYPE as##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ as##TYPE[i] = 0; \
+ vadds_##TYPE (as##TYPE, as##TYPE, VAL, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (as##TYPE[i] == VAL);
+
+#define RUN3(TYPE,VAL) \
+ TYPE ai##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ ai##TYPE[i] = VAL; \
+ vaddi_##TYPE (ai##TYPE, ai##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (ai##TYPE[i] == VAL + 15);
+
+#define RUN3M(TYPE,VAL) \
+ TYPE aim##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ aim##TYPE[i] = VAL; \
+ vaddim_##TYPE (aim##TYPE, aim##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (aim##TYPE[i] == VAL - 16);
+
+#define RUN_ALL() \
+ RUN(_Float16, 4) \
+ RUN2(_Float16, 10) \
+ RUN3M(_Float16, 17) \
+
+int main ()
+{
+ RUN_ALL()
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c
new file mode 100644
index 00000000000..848b6eb77f6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-run.c
@@ -0,0 +1,75 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model --param=riscv-autovec-preference=fixed-vlmax" } */
+
+#include "vand-template.h"
+
+#include <assert.h>
+
+#define SZ 512
+
+#define RUN(TYPE,VAL) \
+ TYPE a##TYPE[SZ]; \
+ TYPE b##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ { \
+ a##TYPE[i] = 123; \
+ b##TYPE[i] = VAL; \
+ } \
+ vand_##TYPE (a##TYPE, a##TYPE, b##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (a##TYPE[i] == (TYPE)(123 & VAL));
+
+#define RUN2(TYPE,VAL) \
+ TYPE as##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ as##TYPE[i] = 123; \
+ vands_##TYPE (as##TYPE, as##TYPE, VAL, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (as##TYPE[i] == (123 & VAL));
+
+#define RUN3(TYPE,VAL) \
+ TYPE ai##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ ai##TYPE[i] = VAL; \
+ vandi_##TYPE (ai##TYPE, ai##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (ai##TYPE[i] == (VAL & 15));
+
+#define RUN3M(TYPE,VAL) \
+ TYPE aim##TYPE[SZ]; \
+ for (int i = 0; i < SZ; i++) \
+ aim##TYPE[i] = VAL; \
+ vandim_##TYPE (aim##TYPE, aim##TYPE, SZ); \
+ for (int i = 0; i < SZ; i++) \
+ assert (aim##TYPE[i] == (VAL & -16));
+
+#define RUN_ALL() \
+ RUN(int8_t, -1) \
+ RUN(uint8_t, 2) \
+ RUN(int16_t, -1) \
+ RUN(uint16_t, 2) \
+ RUN(int32_t, -3) \
+ RUN(uint32_t, 4) \
+ RUN(int64_t, -5) \
+ RUN(uint64_t, 6) \
+ RUN2(int8_t, -7) \
+ RUN2(uint8_t, 8) \
+ RUN2(int16_t, -7) \
+ RUN2(uint16_t, 8) \
+ RUN2(int32_t, -9) \
+ RUN2(uint32_t, 10) \
+ RUN2(int64_t, -11) \
+ RUN2(uint64_t, 12) \
+ RUN3M(int8_t, 13) \
+ RUN3(uint8_t, 14) \
+ RUN3M(int16_t, 13) \
+ RUN3(uint16_t, 14) \
+ RUN3M(int32_t, 15) \
+ RUN3(uint32_t, 16) \
+ RUN3M(int64_t, 17) \
+ RUN3(uint64_t, 18)
+
+int main ()
+{
+ RUN_ALL()
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c
new file mode 100644
index 00000000000..058dc9d86df
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv32gcv.c
@@ -0,0 +1,7 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv32gc_xtheadvector -mabi=ilp32d --param=riscv-autovec-preference=fixed-vlmax" } */
+
+#include "vand-template.h"
+
+/* { dg-final { scan-assembler-times {\tvand\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvand\.vi} 8 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c
new file mode 100644
index 00000000000..def833e8b48
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-rv64gcv.c
@@ -0,0 +1,7 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-std=c99 -fno-vect-cost-model -march=rv64gc_xtheadvector -mabi=lp64d --param=riscv-autovec-preference=fixed-vlmax" } */
+
+#include "vand-template.h"
+
+/* { dg-final { scan-assembler-times {\tvand\.vv} 16 } } */
+/* { dg-final { scan-assembler-times {\tvand\.vi} 8 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h
new file mode 100644
index 00000000000..e2409594f39
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/autovec/vand-template.h
@@ -0,0 +1,61 @@
+#include <stdint-gcc.h>
+
+#define TEST_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vand_##TYPE (TYPE *dst, TYPE *a, TYPE *b, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] & b[i]; \
+ }
+
+#define TEST2_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vands_##TYPE (TYPE *dst, TYPE *a, TYPE b, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] & b; \
+ }
+
+#define TEST3_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vandi_##TYPE (TYPE *dst, TYPE *a, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] & 15; \
+ }
+
+#define TEST3M_TYPE(TYPE) \
+ __attribute__((noipa)) \
+ void vandim_##TYPE (TYPE *dst, TYPE *a, int n) \
+ { \
+ for (int i = 0; i < n; i++) \
+ dst[i] = a[i] & -16; \
+ }
+
+#define TEST_ALL() \
+ TEST_TYPE(int8_t) \
+ TEST_TYPE(uint8_t) \
+ TEST_TYPE(int16_t) \
+ TEST_TYPE(uint16_t) \
+ TEST_TYPE(int32_t) \
+ TEST_TYPE(uint32_t) \
+ TEST_TYPE(int64_t) \
+ TEST_TYPE(uint64_t) \
+ TEST2_TYPE(int8_t) \
+ TEST2_TYPE(uint8_t) \
+ TEST2_TYPE(int16_t) \
+ TEST2_TYPE(uint16_t) \
+ TEST2_TYPE(int32_t) \
+ TEST2_TYPE(uint32_t) \
+ TEST2_TYPE(int64_t) \
+ TEST2_TYPE(uint64_t) \
+ TEST3M_TYPE(int8_t) \
+ TEST3_TYPE(uint8_t) \
+ TEST3M_TYPE(int16_t) \
+ TEST3_TYPE(uint16_t) \
+ TEST3M_TYPE(int32_t) \
+ TEST3_TYPE(uint32_t) \
+ TEST3M_TYPE(int64_t) \
+ TEST3_TYPE(uint64_t)
+
+TEST_ALL()
--
2.17.1
* [PATCH 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (6 preceding siblings ...)
2023-11-17 9:01 ` [PATCH 7/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5) Jun Sha (Joshua)
@ 2023-11-17 9:02 ` Jun Sha (Joshua)
2023-11-17 9:03 ` [PATCH 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 9:02 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
This patch only adds the generation of the XTheadVector-specific
load/store instructions.
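The th_loadstore_width base class added below dispatches on three addressing forms (unit-stride, strided, and indexed). As a semantic sketch only — plain C modeling the byte-width load behavior, not the intrinsic API, with all function names hypothetical — the three forms behave like:

```c
#include <stdint.h>
#include <stddef.h>

/* Unit-stride: consecutive elements from BASE (models th.vlb.v).  */
void
load_unit_stride (int8_t *dst, const int8_t *base, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    dst[i] = base[i];
}

/* Strided: elements STRIDE apart (models th.vlsb.v).  */
void
load_strided (int8_t *dst, const int8_t *base, ptrdiff_t stride, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    dst[i] = base[(ptrdiff_t) i * stride];
}

/* Indexed: per-element offsets from an index vector (models th.vlxb.v).  */
void
load_indexed (int8_t *dst, const int8_t *base, const uint8_t *idx, size_t vl)
{
  for (size_t i = 0; i < vl; i++)
    dst[i] = base[idx[i]];
}
```

The corresponding store forms (th.vsb.v, th.vssb.v, th.vsxb.v) simply reverse the direction of the element copy.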
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class th_loadstore_width): Define new builtin bases.
(BASE): Define new builtin bases.
* config/riscv/riscv-vector-builtins-bases.h:
Define new builtin class.
* config/riscv/riscv-vector-builtins-functions.def (vlsegff):
Include thead-vector-builtins-functions.def.
* config/riscv/riscv-vector-builtins-shapes.cc
(struct th_loadstore_width_def): Define new builtin shapes.
(struct th_indexed_loadstore_width_def):
Define new builtin shapes.
(SHAPE): Define new builtin shapes.
* config/riscv/riscv-vector-builtins-shapes.h:
Define new builtin shapes.
* config/riscv/riscv-vector-builtins-types.def
(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
(vint8m1_t): Add datatypes for XTheadVector.
(vint8m2_t): Likewise.
(vint8m4_t): Likewise.
(vint8m8_t): Likewise.
(vint16m1_t): Likewise.
(vint16m2_t): Likewise.
(vint16m4_t): Likewise.
(vint16m8_t): Likewise.
(vint32m1_t): Likewise.
(vint32m2_t): Likewise.
(vint32m4_t): Likewise.
(vint32m8_t): Likewise.
(vint64m1_t): Likewise.
(vint64m2_t): Likewise.
(vint64m4_t): Likewise.
(vint64m8_t): Likewise.
(vuint8m1_t): Likewise.
(vuint8m2_t): Likewise.
(vuint8m4_t): Likewise.
(vuint8m8_t): Likewise.
(vuint16m1_t): Likewise.
(vuint16m2_t): Likewise.
(vuint16m4_t): Likewise.
(vuint16m8_t): Likewise.
(vuint32m1_t): Likewise.
(vuint32m2_t): Likewise.
(vuint32m4_t): Likewise.
(vuint32m8_t): Likewise.
(vuint64m1_t): Likewise.
(vuint64m2_t): Likewise.
(vuint64m4_t): Likewise.
(vuint64m8_t): Likewise.
* config/riscv/riscv-vector-builtins.cc
(DEF_RVV_I8_OPS): Add datatypes for XTheadVector.
(DEF_RVV_I16_OPS): Add datatypes for XTheadVector.
(DEF_RVV_I32_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U8_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U16_OPS): Add datatypes for XTheadVector.
(DEF_RVV_U32_OPS): Add datatypes for XTheadVector.
* config/riscv/vector.md: Include thead-vector.md.
* config/riscv/thead-vector-builtins-functions.def: New file.
* config/riscv/thead-vector.md: New file.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test.
* gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test.
* gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test.
* gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test.
* gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test.
* gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test.
---
.../riscv/riscv-vector-builtins-bases.cc | 122 +++++++
.../riscv/riscv-vector-builtins-bases.h | 30 ++
.../riscv/riscv-vector-builtins-functions.def | 2 +
.../riscv/riscv-vector-builtins-shapes.cc | 100 ++++++
.../riscv/riscv-vector-builtins-shapes.h | 2 +
.../riscv/riscv-vector-builtins-types.def | 120 +++++++
gcc/config/riscv/riscv-vector-builtins.cc | 300 +++++++++++++++++-
.../riscv/thead-vector-builtins-functions.def | 30 ++
gcc/config/riscv/thead-vector.md | 235 ++++++++++++++
gcc/config/riscv/vector.md | 1 +
.../riscv/rvv/xtheadvector/vlb-vsb.c | 68 ++++
.../riscv/rvv/xtheadvector/vlbu-vsb.c | 68 ++++
.../riscv/rvv/xtheadvector/vlh-vsh.c | 68 ++++
.../riscv/rvv/xtheadvector/vlhu-vsh.c | 68 ++++
.../riscv/rvv/xtheadvector/vlw-vsw.c | 68 ++++
.../riscv/rvv/xtheadvector/vlwu-vsw.c | 68 ++++
16 files changed, 1349 insertions(+), 1 deletion(-)
create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def
create mode 100644 gcc/config/riscv/thead-vector.md
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.cc b/gcc/config/riscv/riscv-vector-builtins-bases.cc
index d70468542ee..186bc4a9bf1 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.cc
@@ -268,6 +268,66 @@ public:
}
};
+/* Implements
+   th.vl(b/h/w)[u].v/th.vs(b/h/w)[u].v/th.vls(b/h/w)[u].v/th.vss(b/h/w)[u].v/
+   th.vlx(b/h/w)[u].v/th.vs[u]x(b/h/w).v codegen.  */
+template<bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+ bool apply_tail_policy_p () const override { return !STORE_P; }
+ bool apply_mask_policy_p () const override { return !STORE_P; }
+
+ unsigned int call_properties (const function_instance &) const override
+ {
+ if (STORE_P)
+ return CP_WRITE_MEMORY;
+ else
+ return CP_READ_MEMORY;
+ }
+
+ bool can_be_overloaded_p (enum predication_type_index pred) const override
+ {
+ if (STORE_P || LST_TYPE == LST_INDEXED)
+ return true;
+ return pred != PRED_TYPE_none;
+ }
+
+ rtx expand (function_expander &e) const override
+ {
+ gcc_assert (TARGET_XTHEADVECTOR);
+ if (LST_TYPE == LST_INDEXED)
+ {
+ if (STORE_P)
+ return e.use_exact_insn (
+ code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+ e.vector_mode ()));
+ else
+ return e.use_exact_insn (
+ code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+ }
+ else if (LST_TYPE == LST_STRIDED)
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+ }
+ else
+ {
+ if (STORE_P)
+ return e.use_contiguous_store_insn (
+ code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+ else
+ return e.use_contiguous_load_insn (
+ code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+ }
+ }
+};
+
/* Implements
vadd/vsub/vand/vor/vxor/vsll/vsra/vsrl/
vmin/vmax/vminu/vmaxu/vdiv/vrem/vdivu/
@@ -2384,6 +2444,37 @@ static CONSTEXPR const seg_indexed_store<UNSPEC_UNORDERED> vsuxseg_obj;
static CONSTEXPR const seg_indexed_store<UNSPEC_ORDERED> vsoxseg_obj;
static CONSTEXPR const vlsegff vlsegff_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+
/* Declare the function base NAME, pointing it to an instance
of class <NAME>_obj. */
#define BASE(NAME) \
@@ -2646,4 +2737,35 @@ BASE (vsuxseg)
BASE (vsoxseg)
BASE (vlsegff)
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+
} // end namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins-bases.h b/gcc/config/riscv/riscv-vector-builtins-bases.h
index 131041ea66f..a062ff6dc95 100644
--- a/gcc/config/riscv/riscv-vector-builtins-bases.h
+++ b/gcc/config/riscv/riscv-vector-builtins-bases.h
@@ -48,6 +48,36 @@ extern const function_base *const vsoxei8;
extern const function_base *const vsoxei16;
extern const function_base *const vsoxei32;
extern const function_base *const vsoxei64;
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
extern const function_base *const vadd;
extern const function_base *const vsub;
extern const function_base *const vrsub;
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 1c37fd5fffe..3e7e134a924 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -651,4 +651,6 @@ DEF_RVV_FUNCTION (vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_p
DEF_RVV_FUNCTION (vsoxseg, seg_indexed_loadstore, none_m_preds, tuple_v_scalar_ptr_eew64_index_ops)
DEF_RVV_FUNCTION (vlsegff, seg_fault_load, full_preds, tuple_v_scalar_const_ptr_size_ptr_ops)
+#include "thead-vector-builtins-functions.def"
+
#undef DEF_RVV_FUNCTION
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index 4a754e0228f..e24c535e496 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -188,6 +188,104 @@ struct indexed_loadstore_def : public function_shape
}
};
+/* th_loadstore_width_def class. */
+struct th_loadstore_width_def : public build_base
+{
+ void build (function_builder &b,
+ const function_group_info &group) const override
+ {
+    /* Do not build the intrinsic if xtheadvector is not enabled.  */
+ if (!TARGET_XTHEADVECTOR)
+ return;
+
+ build_all (b, group);
+ }
+
+ char *get_name (function_builder &b, const function_instance &instance,
+ bool overloaded_p) const override
+ {
+    /* Do not register the function name if xtheadvector is not enabled.  */
+ if (!TARGET_XTHEADVECTOR)
+ return nullptr;
+
+    /* Return nullptr if it cannot be overloaded.  */
+ if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+ return nullptr;
+
+ b.append_base_name (instance.base_name);
+
+ /* vop_v --> vop_v_<type>. */
+ if (!overloaded_p)
+ {
+ /* vop --> vop_v. */
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ /* vop_v --> vop_v_<type>. */
+ b.append_name (type_suffixes[instance.type.index].vector);
+ }
+
+ /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+ for vop_m C++ overloaded API. */
+ if (overloaded_p && instance.pred == PRED_TYPE_m)
+ return b.finish_name ();
+ b.append_name (predication_suffixes[instance.pred]);
+ return b.finish_name ();
+ }
+};
+
+/* th_indexed_loadstore_width_def class. */
+struct th_indexed_loadstore_width_def : public function_shape
+{
+ void build (function_builder &b,
+ const function_group_info &group) const override
+ {
+    /* Do not build the intrinsic if xtheadvector is not enabled.  */
+ if (!TARGET_XTHEADVECTOR)
+ return;
+
+ for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES;
+ ++pred_idx)
+ {
+ for (unsigned int vec_type_idx = 0;
+ group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES;
+ ++vec_type_idx)
+ {
+ tree index_type = group.ops_infos.args[1].get_tree_type (
+ group.ops_infos.types[vec_type_idx].index);
+ if (!index_type)
+ continue;
+ build_one (b, group, pred_idx, vec_type_idx);
+ }
+ }
+ }
+
+ char *get_name (function_builder &b, const function_instance &instance,
+ bool overloaded_p) const override
+ {
+    /* Return nullptr if it cannot be overloaded.  */
+ if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred))
+ return nullptr;
+
+ b.append_base_name (instance.base_name);
+ /* vop_v --> vop_v_<type>. */
+ if (!overloaded_p)
+ {
+ /* vop --> vop_v. */
+ b.append_name (operand_suffixes[instance.op_info->op]);
+ /* vop_v --> vop_v_<type>. */
+ b.append_name (type_suffixes[instance.type.index].vector);
+ }
+
+ /* According to rvv-intrinsic-doc, it does not add "_m" suffix
+ for vop_m C++ overloaded API. */
+ if (overloaded_p && instance.pred == PRED_TYPE_m)
+ return b.finish_name ();
+ b.append_name (predication_suffixes[instance.pred]);
+ return b.finish_name ();
+ }
+};
+
/* alu_def class. */
struct alu_def : public build_base
{
@@ -988,6 +1086,8 @@ SHAPE(vsetvl, vsetvl)
SHAPE(vsetvl, vsetvlmax)
SHAPE(loadstore, loadstore)
SHAPE(indexed_loadstore, indexed_loadstore)
+SHAPE(th_loadstore_width, th_loadstore_width)
+SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width)
SHAPE(alu, alu)
SHAPE(alu_frm, alu_frm)
SHAPE(widen_alu, widen_alu)
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h
index df9884bb572..1d93895b87a 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.h
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h
@@ -28,6 +28,8 @@ extern const function_shape *const vsetvl;
extern const function_shape *const vsetvlmax;
extern const function_shape *const loadstore;
extern const function_shape *const indexed_loadstore;
+extern const function_shape *const th_loadstore_width;
+extern const function_shape *const th_indexed_loadstore_width;
extern const function_shape *const alu;
extern const function_shape *const alu_frm;
extern const function_shape *const widen_alu;
diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
index 6aa45ae9a7e..74b1be6498c 100644
--- a/gcc/config/riscv/riscv-vector-builtins-types.def
+++ b/gcc/config/riscv/riscv-vector-builtins-types.def
@@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see
#define DEF_RVV_I_OPS(TYPE, REQUIRE)
#endif
+/* Use the "DEF_RVV_I8_OPS" macro to include all signed integer types that
+   will be iterated and registered as byte-width (EEW=8) intrinsic functions. */
+#ifndef DEF_RVV_I8_OPS
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_I16_OPS" macro to include all signed integer types that
+   will be iterated and registered as halfword-width (EEW=16) intrinsic
+   functions. */
+#ifndef DEF_RVV_I16_OPS
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_I32_OPS" macro to include all signed integer types that
+   will be iterated and registered as word-width (EEW=32) intrinsic
+   functions. */
+#ifndef DEF_RVV_I32_OPS
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE)
+#endif
+
/* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be
iterated and registered as intrinsic functions. */
#ifndef DEF_RVV_U_OPS
#define DEF_RVV_U_OPS(TYPE, REQUIRE)
#endif
+/* Use the "DEF_RVV_U8_OPS" macro to include all unsigned integer types that
+   will be iterated and registered as byte-width (EEW=8) intrinsic functions. */
+#ifndef DEF_RVV_U8_OPS
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_U16_OPS" macro to include all unsigned integer types that
+   will be iterated and registered as halfword-width (EEW=16) intrinsic
+   functions. */
+#ifndef DEF_RVV_U16_OPS
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE)
+#endif
+
+/* Use the "DEF_RVV_U32_OPS" macro to include all unsigned integer types that
+   will be iterated and registered as word-width (EEW=32) intrinsic
+   functions. */
+#ifndef DEF_RVV_U32_OPS
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE)
+#endif
+
/* Use "DEF_RVV_F_OPS" macro include all floating-point which will be
iterated and registered as intrinsic functions. */
#ifndef DEF_RVV_F_OPS
@@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint8m1_t, 0)
+DEF_RVV_I8_OPS (vint8m2_t, 0)
+DEF_RVV_I8_OPS (vint8m4_t, 0)
+DEF_RVV_I8_OPS (vint8m8_t, 0)
+DEF_RVV_I8_OPS (vint16m1_t, 0)
+DEF_RVV_I8_OPS (vint16m2_t, 0)
+DEF_RVV_I8_OPS (vint16m4_t, 0)
+DEF_RVV_I8_OPS (vint16m8_t, 0)
+DEF_RVV_I8_OPS (vint32m1_t, 0)
+DEF_RVV_I8_OPS (vint32m2_t, 0)
+DEF_RVV_I8_OPS (vint32m4_t, 0)
+DEF_RVV_I8_OPS (vint32m8_t, 0)
+DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I16_OPS (vint16m1_t, 0)
+DEF_RVV_I16_OPS (vint16m2_t, 0)
+DEF_RVV_I16_OPS (vint16m4_t, 0)
+DEF_RVV_I16_OPS (vint16m8_t, 0)
+DEF_RVV_I16_OPS (vint32m1_t, 0)
+DEF_RVV_I16_OPS (vint32m2_t, 0)
+DEF_RVV_I16_OPS (vint32m4_t, 0)
+DEF_RVV_I16_OPS (vint32m8_t, 0)
+DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_I32_OPS (vint32m1_t, 0)
+DEF_RVV_I32_OPS (vint32m2_t, 0)
+DEF_RVV_I32_OPS (vint32m4_t, 0)
+DEF_RVV_I32_OPS (vint32m8_t, 0)
+DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64)
+
DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64)
DEF_RVV_U_OPS (vuint8mf4_t, 0)
DEF_RVV_U_OPS (vuint8mf2_t, 0)
@@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint8m1_t, 0)
+DEF_RVV_U8_OPS (vuint8m2_t, 0)
+DEF_RVV_U8_OPS (vuint8m4_t, 0)
+DEF_RVV_U8_OPS (vuint8m8_t, 0)
+DEF_RVV_U8_OPS (vuint16m1_t, 0)
+DEF_RVV_U8_OPS (vuint16m2_t, 0)
+DEF_RVV_U8_OPS (vuint16m4_t, 0)
+DEF_RVV_U8_OPS (vuint16m8_t, 0)
+DEF_RVV_U8_OPS (vuint32m1_t, 0)
+DEF_RVV_U8_OPS (vuint32m2_t, 0)
+DEF_RVV_U8_OPS (vuint32m4_t, 0)
+DEF_RVV_U8_OPS (vuint32m8_t, 0)
+DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U16_OPS (vuint16m1_t, 0)
+DEF_RVV_U16_OPS (vuint16m2_t, 0)
+DEF_RVV_U16_OPS (vuint16m4_t, 0)
+DEF_RVV_U16_OPS (vuint16m8_t, 0)
+DEF_RVV_U16_OPS (vuint32m1_t, 0)
+DEF_RVV_U16_OPS (vuint32m2_t, 0)
+DEF_RVV_U16_OPS (vuint32m4_t, 0)
+DEF_RVV_U16_OPS (vuint32m8_t, 0)
+DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
+DEF_RVV_U32_OPS (vuint32m1_t, 0)
+DEF_RVV_U32_OPS (vuint32m2_t, 0)
+DEF_RVV_U32_OPS (vuint32m4_t, 0)
+DEF_RVV_U32_OPS (vuint32m8_t, 0)
+DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64)
+DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64)
+
DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | RVV_REQUIRE_MIN_VLEN_64)
DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16)
DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16)
@@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64)
DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64)
#undef DEF_RVV_I_OPS
+#undef DEF_RVV_I8_OPS
+#undef DEF_RVV_I16_OPS
+#undef DEF_RVV_I32_OPS
#undef DEF_RVV_U_OPS
+#undef DEF_RVV_U8_OPS
+#undef DEF_RVV_U16_OPS
+#undef DEF_RVV_U32_OPS
#undef DEF_RVV_F_OPS
#undef DEF_RVV_B_OPS
#undef DEF_RVV_WEXTI_OPS
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 6330a3a41c3..c2f1f6d1a9b 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -246,6 +246,63 @@ static const rvv_type_info iu_ops[] = {
#include "riscv-vector-builtins-types.def"
{NUM_VECTOR_TYPES, 0}};
+/* A list of all signed integer types for EEW=8 intrinsic functions. */
+static const rvv_type_info i8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed integer types for EEW=16 intrinsic functions. */
+static const rvv_type_info i16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed integer types for EEW=32 intrinsic functions. */
+static const rvv_type_info i32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types for EEW=8 intrinsic functions. */
+static const rvv_type_info u8_ops[] = {
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types for EEW=16 intrinsic functions. */
+static const rvv_type_info u16_ops[] = {
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all unsigned integer types for EEW=32 intrinsic functions. */
+static const rvv_type_info u32_ops[] = {
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types for EEW=8 intrinsic
+   functions. */
+static const rvv_type_info iu8_ops[] = {
+#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types for EEW=16 intrinsic
+   functions. */
+static const rvv_type_info iu16_ops[] = {
+#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
+/* A list of all signed and unsigned integer types for EEW=32 intrinsic
+   functions. */
+static const rvv_type_info iu32_ops[] = {
+#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
+#include "riscv-vector-builtins-types.def"
+ {NUM_VECTOR_TYPES, 0}};
+
/* A list of all types will be registered for intrinsic functions. */
static const rvv_type_info all_ops[] = {
#define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE},
@@ -913,7 +970,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[]
/* A list of args for vector_type func (vector_type) function. */
static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[]
- = {rvv_arg_type_info (RVV_BASE_vector),
+ = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, size_t)
+ * function. */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[]
+ = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+ rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end};
+
+/* A list of args for vector_type func (const scalar_type *, index_type)
+ * function. */
+static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[]
+ = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr),
+ rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, index_type, vector_type)
+ * function. */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[]
+ = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+ rvv_arg_type_info (RVV_BASE_unsigned_vector),
+ rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end};
+
+/* A list of args for void func (scalar_type *, size_t, vector_type)
+ * function. */
+static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[]
+ = {rvv_arg_type_info (RVV_BASE_scalar_ptr),
+ rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector),
rvv_arg_type_info_end};
/* A list of none preds that will be registered for intrinsic functions. */
@@ -2604,6 +2686,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops
rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */
ext_vcreate_args /* Args */};
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops
+ = {i8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops
+ = {i16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops
+ = {i32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops
+ = {u8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops
+ = {u16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *)
+ * function registration. */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops
+ = {u32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops
+ = {i8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops
+ = {i16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops
+ = {i32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops
+ = {u8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops
+ = {u16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * size_t) function registration. */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops
+ = {u32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_size_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration. */
+static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops
+ = {i8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew8_index_type) function registration. */
+static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops
+ = {u8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration. */
+static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops
+ = {i16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew16_index_type) function registration. */
+static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops
+ = {u16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration. */
+static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops
+ = {i32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for vector_type func (const scalar_type *,
+ * eew32_index_type) function registration. */
+static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops
+ = {u32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_vector), /* Return type */
+ scalar_const_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew8_index_type,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops
+ = {iu8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew16_index_type,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops
+ = {iu16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, eew32_index_type,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops
+ = {iu32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_index_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops
+ = {iu8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops
+ = {iu16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, vector_type)
+ * function registration. */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops
+ = {iu32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops
+ = {iu8_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops
+ = {iu16_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_size_args /* Args */};
+
+/* A static operand information for void func (scalar_type *, size_t,
+ * vector_type) function registration. */
+static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops
+ = {iu32_ops, /* Types */
+ OP_TYPE_v, /* Suffix */
+ rvv_arg_type_info (RVV_BASE_void), /* Return type */
+ scalar_ptr_size_args /* Args */};
+
/* A list of all RVV base function types. */
static CONSTEXPR const function_type_info function_types[] = {
#define DEF_RVV_TYPE_INDEX( \
diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def
new file mode 100644
index 00000000000..2885e7a475c
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins-functions.def
@@ -0,0 +1,30 @@
+DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops)
+DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops)
+DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlswu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops)
+DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops)
+DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops)
\ No newline at end of file
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..d1e9f305922
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,235 @@
+(define_c_enum "unspec" [
+ UNSPEC_TH_VLB
+ UNSPEC_TH_VLBU
+ UNSPEC_TH_VLH
+ UNSPEC_TH_VLHU
+ UNSPEC_TH_VLW
+ UNSPEC_TH_VLWU
+
+ UNSPEC_TH_VLSB
+ UNSPEC_TH_VLSBU
+ UNSPEC_TH_VLSH
+ UNSPEC_TH_VLSHU
+ UNSPEC_TH_VLSW
+ UNSPEC_TH_VLSWU
+
+ UNSPEC_TH_VLXB
+ UNSPEC_TH_VLXBU
+ UNSPEC_TH_VLXH
+ UNSPEC_TH_VLXHU
+ UNSPEC_TH_VLXW
+ UNSPEC_TH_VLXWU
+
+ UNSPEC_TH_VSUXB
+ UNSPEC_TH_VSUXH
+ UNSPEC_TH_VSUXW
+])
+
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+ UNSPEC_TH_VLB UNSPEC_TH_VLBU
+ UNSPEC_TH_VLH UNSPEC_TH_VLHU
+ UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+ UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+ UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+ UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+ UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+ UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+ UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+ (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+ (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+ (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+ (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+ (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+ (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+ (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+ (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+ (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+ (UNSPEC_TH_VSUXB "b")
+ (UNSPEC_TH_VSUXH "h")
+ (UNSPEC_TH_VSUXW "w")
+])
+
+(define_int_attr vlmem_order_attr [
+ (UNSPEC_TH_VLXB "")
+ (UNSPEC_TH_VLXH "")
+ (UNSPEC_TH_VLXW "")
+ (UNSPEC_TH_VSUXB "u")
+ (UNSPEC_TH_VSUXH "u")
+ (UNSPEC_TH_VSUXW "u")
+])
+
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+ UNSPEC_TH_VLB
+ UNSPEC_TH_VLH
+ UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+ UNSPEC_TH_VLSB
+ UNSPEC_TH_VLSH
+ UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+ UNSPEC_TH_VLXB
+ UNSPEC_TH_VLXH
+ UNSPEC_TH_VLXW
+ UNSPEC_TH_VSUXB
+ UNSPEC_TH_VSUXH
+ UNSPEC_TH_VSUXW
+])
+
+;; Vector Unit-Stride Instructions
+(define_expand "@pred_mov_width<vlmem_op_attr><mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand")
+ (match_operand 4 "vector_length_operand")
+ (match_operand 5 "const_int_operand")
+ (match_operand 6 "const_int_operand")
+ (match_operand 7 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+ (match_operand:V_VLS 3 "vector_move_operand")
+ (match_operand:V_VLS 2 "vector_merge_operand")))]
+ "TARGET_XTHEADVECTOR"
+ {})
+
+(define_insn_and_split "*pred_mov_width<vlmem_op_attr><mode>"
+ [(set (match_operand:V_VLS 0 "nonimmediate_operand" "=vr, vr, vd, m, vr, vr")
+ (if_then_else:V_VLS
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm, vmWc1, Wc1, Wc1")
+ (match_operand 4 "vector_length_operand" " rK, rK, rK, rK, rK, rK")
+ (match_operand 5 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 6 "const_int_operand" " i, i, i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+ (match_operand:V_VLS 3 "reg_or_mem_operand" " m, m, m, vr, vr, vr")
+ (match_operand:V_VLS 2 "vector_merge_operand" " 0, vu, vu, vu, vu, 0")))]
+ "(TARGET_XTHEADVECTOR
+ && (register_operand (operands[0], <MODE>mode)
+ || register_operand (operands[3], <MODE>mode)))"
+ "@
+ th.vl<vlmem_op_attr>.v\t%0,%3%p1
+ th.vl<vlmem_op_attr>.v\t%0,%3
+ th.vl<vlmem_op_attr>.v\t%0,%3,%1.t
+ th.vs<vlmem_op_attr>.v\t%3,%0%p1
+ th.vmv.v.v\t%0,%3
+ th.vmv.v.v\t%0,%3"
+ "&& register_operand (operands[0], <MODE>mode)
+ && register_operand (operands[3], <MODE>mode)
+ && satisfies_constraint_vu (operands[2])
+ && INTVAL (operands[7]) == riscv_vector::VLMAX"
+ [(set (match_dup 0) (match_dup 3))]
+ ""
+ [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store_width<vlmem_op_attr><mode>"
+ [(set (match_operand:VI 0 "memory_operand" "+m")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+ (match_operand:VI 2 "register_operand" " vr")
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "th.vs<vlmem_op_attr>.v\t%2,%0%p1"
+ [(set_attr "type" "vste")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 4))
+ (set_attr "vl_op_idx" "3")])
+
+;; Vector Strided Instructions
+(define_insn "@pred_strided_load_width<vlmem_op_attr><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vr, vr, vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, Wc1, vm")
+ (match_operand 5 "vector_length_operand" " rK, rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP)
+ (unspec:VI
+ [(match_operand:VI 3 "memory_operand" " m, m, m")
+ (match_operand 4 "pmode_reg_or_0_operand" " rJ, rJ, rJ")] UNSPEC_TH_VLSMEM_OP)
+ (match_operand:VI 2 "vector_merge_operand" " 0, vu, vu")))]
+ "TARGET_XTHEADVECTOR"
+ "th.vls<vlmem_op_attr>.v\t%0,%3,%z4%p1"
+ [(set_attr "type" "vlds")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_strided_store_width<vlmem_op_attr><mode>"
+ [(set (match_operand:VI 0 "memory_operand" "+m")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP)
+ (unspec:VI
+ [(match_operand 2 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VI 3 "register_operand" " vr")] UNSPEC_TH_VSSMEM_OP)
+ (match_dup 0)))]
+ "TARGET_XTHEADVECTOR"
+ "th.vss<vlmem_op_attr>.v\t%3,%0,%z2%p1"
+ [(set_attr "type" "vsts")
+ (set_attr "mode" "<MODE>")
+ (set (attr "avl_type_idx") (const_int 5))])
+
+;; Vector Indexed Instructions
+(define_insn "@pred_indexed_load_width<vlmem_op_attr><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vr,vd, vr")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
+ (match_operand 5 "vector_length_operand" " rK, rK,rK, rK")
+ (match_operand 6 "const_int_operand" " i, i, i, i")
+ (match_operand 7 "const_int_operand" " i, i, i, i")
+ (match_operand 8 "const_int_operand" " i, i, i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP)
+ (unspec:VI
+ [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ,rJ, rJ")
+ (mem:BLK (scratch))
+ (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP)
+ (match_operand:VI 2 "vector_merge_operand" " vu, vu, 0, 0")))]
+ "TARGET_XTHEADVECTOR"
+ "th.vlx<vlmem_op_attr>.v\t%0,(%z3),%4%p1"
+ [(set_attr "type" "vldux")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_indexed_<vlmem_order_attr>store_width<vlmem_op_attr><mode>"
+ [(set (mem:BLK (scratch))
+ (unspec:BLK
+ [(unspec:<VM>
+ [(match_operand:<VM> 0 "vector_mask_operand" "vmWc1")
+ (match_operand 4 "vector_length_operand" " rK")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP)
+ (match_operand 1 "pmode_reg_or_0_operand" " rJ")
+ (match_operand:VI 2 "register_operand" " vr")
+ (match_operand:VI 3 "register_operand" " vr")] UNSPEC_TH_VSXMEM_OP))]
+ "TARGET_XTHEADVECTOR"
+ "th.vs<vlmem_order_attr>x<vlmem_op_attr>.v\t%3,(%z1),%2%p0"
+ [(set_attr "type" "vstux")
+ (set_attr "mode" "<MODE>")])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 2af237854f9..a920264f35b 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -8660,5 +8660,6 @@ (define_insn "@pred_indexed_<order>store<V32T:mode><RATIO2I:mode>"
[(set_attr "type" "vssegt<order>x")
(set_attr "mode" "<V32T:MODE>")])
+(include "thead-vector.md")
(include "autovec.md")
(include "autovec-opt.md")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
new file mode 100644
index 00000000000..740cbee1c95
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out)
+{
+ vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_tu (v3, v2, v2, 4);
+ __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+** th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlb_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_m (mask, v3, v3, 4);
+ __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlb\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlb.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vv\tv[0-9]+,\s*v[0-9]+,\s*v[0-9]+
+** th.vadd\.vv\tv[1-9][0-9]?,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t
+** th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlb_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlb_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vv_i32m1 (v2, v2, 4);
+ vint32m1_t v4 = __riscv_vadd_vv_i32m1_tumu (mask, v3, v2, v2, 4);
+ __riscv_th_vsb_v_i32m1 (out, v4, 4);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
new file mode 100644
index 00000000000..ec34fee577f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vsb\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+ __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+ __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlbu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlbu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsb.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlbu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlbu_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_th_vsb_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
new file mode 100644
index 00000000000..ac242af3462
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, -16, 4);
+ __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlh_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, -16, 4);
+ __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlh\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlh.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlh_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlh_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, -16, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_th_vsh_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
new file mode 100644
index 00000000000..211b120fdd5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vsh\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+ __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+ __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlhu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlhu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsh.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlhu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlhu_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_th_vsh_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
new file mode 100644
index 00000000000..d192a3b2eae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, int32_t x)
+{
+ vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tu (v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tu (v3, v2, x, 4);
+ __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlw_v_i32m1_m (mask, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_m (mask, v3, x, 4);
+ __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlw\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlw.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vx\tv[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+
+** th.vadd\.vx\tv[1-9][0-9]?,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t
+** th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, int32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vint32m1_t v = __riscv_th_vlw_v_i32m1 (in, 4);
+ vint32m1_t v2 = __riscv_th_vlw_v_i32m1_tumu (mask, v, in, 4);
+ vint32m1_t v3 = __riscv_vadd_vx_i32m1 (v2, x, 4);
+ vint32m1_t v4 = __riscv_vadd_vx_i32m1_tumu (mask, v3, v2, x, 4);
+ __riscv_th_vsw_v_i32m1 (out, v4, 4);
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
new file mode 100644
index 00000000000..28ee044c1e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c
@@ -0,0 +1,68 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gcxtheadvector -mabi=ilp32d -O3" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include "riscv_vector.h"
+
+/*
+** f1:
+** ...
+** th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vsw\.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f1 (void * in, void *out, uint32_t x)
+{
+ vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tu (v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tu (v3, v2, -16, 4);
+ __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f2:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f2 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_m (mask, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_m (mask, v3, -16, 4);
+ __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
+
+/*
+** f3:
+** ...
+** th.vlm.v\tv[0-9]+,0\([a-x0-9]+\)
+** ...
+** th.vlwu\.v\tv[0-9]+,0\([a-x0-9]+\)
+** th.vlwu.v\tv[0-9]+,0\([a-x0-9]+\),v0.t
+** th.vadd\.vi\tv[0-9]+,\s*v[0-9]+,\s*-16
+** th.vadd\.vi\tv[1-9][0-9]?,\s*v[0-9]+,\s*-16,\s*v0.t
+** th.vsw.v\tv[0-9]+,0\([a-x0-9]+\)
+** ret
+*/
+void f3 (void * in, void *out, uint32_t x)
+{
+ vbool32_t mask = *(vbool32_t*)in;
+ asm volatile ("":::"memory");
+ vuint32m1_t v = __riscv_th_vlwu_v_u32m1 (in, 4);
+ vuint32m1_t v2 = __riscv_th_vlwu_v_u32m1_tumu (mask, v, in, 4);
+ vuint32m1_t v3 = __riscv_vadd_vx_u32m1 (v2, -16, 4);
+ vuint32m1_t v4 = __riscv_vadd_vx_u32m1_tumu (mask, v3, v2, -16, 4);
+ __riscv_th_vsw_v_u32m1 (out, v4, 4);
+}
\ No newline at end of file
--
2.17.1
* [PATCH 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
` (7 preceding siblings ...)
2023-11-17 9:02 ` [PATCH 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
@ 2023-11-17 9:03 ` Jun Sha (Joshua)
8 siblings, 0 replies; 10+ messages in thread
From: Jun Sha (Joshua) @ 2023-11-17 9:03 UTC (permalink / raw)
To: gcc-patches
Cc: jim.wilson.gcc, palmer, andrew, philipp.tomsich, jeffreyalaw,
christoph.muellner, Jun Sha (Joshua)
Because the XTheadVector extension does not support fractional
operations, we need to remove the related intrinsics.
The types involved are as follows:
v(u)int8mf8_t,
v(u)int8mf4_t,
v(u)int8mf2_t,
v(u)int16mf4_t,
v(u)int16mf2_t,
v(u)int32mf2_t,
vfloat16mf4_t,
vfloat16mf2_t,
vfloat32mf2_t
gcc/ChangeLog:
* config/riscv/riscv-protos.h (riscv_v_ext_mode_p):
New extern.
* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
New function.
(build_one): If the checked types fail, no function is generated.
* config/riscv/riscv-vector-switch.def (ENTRY):
Disable fractional mode for the XTheadVector extension.
(TUPLE_ENTRY): Likewise.
* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/fractional-type.c: New test.
---
gcc/config/riscv/riscv-protos.h | 1 +
.../riscv/riscv-vector-builtins-shapes.cc | 22 +++
gcc/config/riscv/riscv-vector-switch.def | 144 +++++++++---------
gcc/config/riscv/riscv.cc | 2 +-
.../gcc.target/riscv/rvv/fractional-type.c | 79 ++++++++++
5 files changed, 175 insertions(+), 73 deletions(-)
create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 8cdfadbcf10..7de4f81aa9a 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -153,6 +153,7 @@ extern poly_uint64 riscv_regmode_natural_size (machine_mode);
extern bool riscv_v_ext_vector_mode_p (machine_mode);
extern bool riscv_v_ext_tuple_mode_p (machine_mode);
extern bool riscv_v_ext_vls_mode_p (machine_mode);
+extern bool riscv_v_ext_mode_p (machine_mode);
extern int riscv_get_v_regno_alignment (machine_mode);
extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
extern void riscv_subword_address (rtx, rtx *, rtx *, rtx *, rtx *);
diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
index e24c535e496..dcdb9506ff2 100644
--- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc
@@ -33,6 +33,24 @@
namespace riscv_vector {
+/* Check whether the RET and ARGS are valid for the function. */
+
+static bool
+check_type (tree ret, vec<tree> &args)
+{
+ tree arg;
+ unsigned i;
+
+ if (!ret || (builtin_type_p (ret) && !riscv_v_ext_mode_p (TYPE_MODE (ret))))
+ return false;
+
+ FOR_EACH_VEC_ELT (args, i, arg)
+ if (!arg || (builtin_type_p (arg) && !riscv_v_ext_mode_p (TYPE_MODE (arg))))
+ return false;
+
+ return true;
+}
+
/* Add one function instance for GROUP, using operand suffix at index OI,
mode suffix at index PAIR && bi and predication suffix at index pred_idx. */
static void
@@ -49,6 +67,10 @@ build_one (function_builder &b, const function_group_info &group,
group.ops_infos.types[vec_type_idx].index);
b.allocate_argument_types (function_instance, argument_types);
b.apply_predication (function_instance, return_type, argument_types);
+
+ if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+ return;
+
b.add_overloaded_function (function_instance, *group.shape);
b.add_unique_function (function_instance, (*group.shape), return_type,
argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f17f87f89c9 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
ENTRY (RVVM4QI, true, LMUL_4, 2)
ENTRY (RVVM2QI, true, LMUL_2, 4)
ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8HI, true, LMUL_8, 2)
ENTRY (RVVM4HI, true, LMUL_4, 4)
ENTRY (RVVM2HI, true, LMUL_2, 8)
ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16. */
ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
/* Disable modes if TARGET_MIN_VLEN == 32. */
ENTRY (RVVM8SI, true, LMUL_8, 4)
ENTRY (RVVM4SI, true, LMUL_4, 8)
ENTRY (RVVM2SI, true, LMUL_2, 16)
ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32. */
ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
/* Disable modes if !TARGET_VECTOR_ELEN_64. */
ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
#endif
TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, (TARGET_VECTOR_ELEN_FP_16) && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, (TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, (TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 754107cdaac..059b82c01ef 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1263,7 +1263,7 @@ riscv_v_ext_vls_mode_p (machine_mode mode)
/* Return true if it is either RVV vector mode or RVV tuple mode. */
-static bool
+bool
riscv_v_ext_mode_p (machine_mode mode)
{
return riscv_v_ext_vector_mode_p (mode) || riscv_v_ext_tuple_mode_p (mode)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c b/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
new file mode 100644
index 00000000000..c0e5c5ef4db
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/fractional-type.c
@@ -0,0 +1,79 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_zvfh_xtheadvector -mabi=ilp32d -O3" } */
+
+#include "riscv_vector.h"
+
+void invalid_type ()
+{
+ vint8mf8_t v1; /* { dg-error {unknown type name 'vint8mf8_t'} } */
+ vint8mf4_t v2; /* { dg-error {unknown type name 'vint8mf4_t'} } */
+ vint8mf2_t v3; /* { dg-error {unknown type name 'vint8mf2_t'} } */
+ vint16mf4_t v4; /* { dg-error {unknown type name 'vint16mf4_t'} } */
+ vint16mf2_t v5; /* { dg-error {unknown type name 'vint16mf2_t'} } */
+ vint32mf2_t v6; /* { dg-error {unknown type name 'vint32mf2_t'} } */
+ vuint8mf8_t v7; /* { dg-error {unknown type name 'vuint8mf8_t'} } */
+ vuint8mf4_t v8; /* { dg-error {unknown type name 'vuint8mf4_t'} } */
+ vuint8mf2_t v9; /* { dg-error {unknown type name 'vuint8mf2_t'} } */
+ vuint16mf4_t v10; /* { dg-error {unknown type name 'vuint16mf4_t'} } */
+ vuint16mf2_t v11; /* { dg-error {unknown type name 'vuint16mf2_t'} } */
+ vuint32mf2_t v12; /* { dg-error {unknown type name 'vuint32mf2_t'} } */
+ vfloat16mf4_t v13; /* { dg-error {unknown type name 'vfloat16mf4_t'} } */
+ vfloat16mf2_t v14; /* { dg-error {unknown type name 'vfloat16mf2_t'} } */
+ vfloat32mf2_t v15; /* { dg-error {unknown type name 'vfloat32mf2_t'} } */
+}
+
+void valid_type ()
+{
+ vint8m1_t v1;
+ vint8m2_t v2;
+ vint8m4_t v3;
+ vint8m8_t v4;
+ vint16m1_t v5;
+ vint16m2_t v6;
+ vint16m4_t v7;
+ vint16m8_t v8;
+ vint32m1_t v9;
+ vint32m2_t v10;
+ vint32m4_t v11;
+ vint32m8_t v12;
+ vint64m1_t v13;
+ vint64m2_t v14;
+ vint64m4_t v15;
+ vint64m8_t v16;
+ vuint8m1_t v17;
+ vuint8m2_t v18;
+ vuint8m4_t v19;
+ vuint8m8_t v20;
+ vuint16m1_t v21;
+ vuint16m2_t v22;
+ vuint16m4_t v23;
+ vuint16m8_t v24;
+ vuint32m1_t v25;
+ vuint32m2_t v26;
+ vuint32m4_t v27;
+ vuint32m8_t v28;
+ vuint64m1_t v29;
+ vuint64m2_t v30;
+ vuint64m4_t v31;
+ vuint64m8_t v32;
+ vfloat16m1_t v33;
+ vfloat16m2_t v34;
+ vfloat16m4_t v35;
+ vfloat16m8_t v36;
+ vfloat32m1_t v37;
+ vfloat32m2_t v38;
+ vfloat32m4_t v39;
+ vfloat32m8_t v40;
+ vfloat64m1_t v41;
+ vfloat64m2_t v42;
+ vfloat64m4_t v43;
+ vfloat64m8_t v44;
+
+ vbool1_t v45;
+ vbool2_t v46;
+ vbool4_t v47;
+ vbool8_t v48;
+ vbool16_t v49;
+ vbool32_t v50;
+ vbool64_t v51;
+}
\ No newline at end of file
--
2.17.1
Thread overview: 10+ messages
2023-11-17 8:19 [PATCH 0/9] RISC-V: Support XTheadVector extensions Jun Sha (Joshua)
2023-11-17 8:52 ` [PATCH 1/9] RISC-V: minimal support for xtheadvector Jun Sha (Joshua)
2023-11-17 8:55 ` [PATCH 2/9] RISC-V: Handle differences between xtheadvector and vector Jun Sha (Joshua)
2023-11-17 8:56 ` [PATCH 3/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part1) Jun Sha (Joshua)
2023-11-17 8:58 ` [PATCH 4/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part2) Jun Sha (Joshua)
2023-11-17 8:59 ` [PATCH 5/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part3) Jun Sha (Joshua)
2023-11-17 9:00 ` [PATCH 6/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part4) Jun Sha (Joshua)
2023-11-17 9:01 ` [PATCH 7/9] RISC-V: Tests for overlapping RVV and XTheadVector instructions (Part5) Jun Sha (Joshua)
2023-11-17 9:02 ` [PATCH 8/9] RISC-V: Add support for xtheadvector-specific load/store intrinsics Jun Sha (Joshua)
2023-11-17 9:03 ` [PATCH 9/9] RISC-V: Disable fractional type intrinsics for the XTheadVector extension Jun Sha (Joshua)