public inbox for gcc-patches@gcc.gnu.org
* [PATCH] RISC-V: Add tuple types support
@ 2023-04-18 12:09 juzhe.zhong
  2023-05-03 10:40 ` Kito Cheng
  0 siblings, 1 reply; 3+ messages in thread
From: juzhe.zhong @ 2023-04-18 12:09 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Juzhe-Zhong

From: Juzhe-Zhong <juzhe.zhong@rivai.ai>
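
This patch adds the RVV tuple types (vint8mf8x2_t ... vfloat64m4x2_t)
together with their machine modes, in preparation for the segment
load/store intrinsics.  Each tuple mode is named VNx<NF>x<N><MODE> and
holds NF subparts of the corresponding vector mode.  New helpers
(get_nf, get_subpart_mode, get_tuple_mode) query that layout, and
expand_tuple_move lowers whole-tuple moves into per-subpart moves.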

gcc/ChangeLog:

        * config/riscv/riscv-modes.def (RVV_TUPLE_MODES): New macro.
        (RVV_TUPLE_PARTIAL_MODES): Ditto.
        * config/riscv/riscv-protos.h (riscv_v_ext_tuple_mode_p): New function.
        (get_nf): Ditto.
        (get_subpart_mode): Ditto.
        (get_tuple_mode): Ditto.
        (expand_tuple_move): Ditto.
        * config/riscv/riscv-v.cc (ENTRY): New macro.
        (TUPLE_ENTRY): Ditto.
        (get_nf): New function.
        (get_subpart_mode): Ditto.
        (get_tuple_mode): Ditto.
        (expand_tuple_move): Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TUPLE_TYPE): New macro.
        (register_tuple_type): New function.
        * config/riscv/riscv-vector-builtins.def (DEF_RVV_TUPLE_TYPE): New macro.
        (vint8mf8x2_t): New macro.
        (vuint8mf8x2_t): Ditto.
        (vint8mf8x3_t): Ditto.
        (vuint8mf8x3_t): Ditto.
        (vint8mf8x4_t): Ditto.
        (vuint8mf8x4_t): Ditto.
        (vint8mf8x5_t): Ditto.
        (vuint8mf8x5_t): Ditto.
        (vint8mf8x6_t): Ditto.
        (vuint8mf8x6_t): Ditto.
        (vint8mf8x7_t): Ditto.
        (vuint8mf8x7_t): Ditto.
        (vint8mf8x8_t): Ditto.
        (vuint8mf8x8_t): Ditto.
        (vint8mf4x2_t): Ditto.
        (vuint8mf4x2_t): Ditto.
        (vint8mf4x3_t): Ditto.
        (vuint8mf4x3_t): Ditto.
        (vint8mf4x4_t): Ditto.
        (vuint8mf4x4_t): Ditto.
        (vint8mf4x5_t): Ditto.
        (vuint8mf4x5_t): Ditto.
        (vint8mf4x6_t): Ditto.
        (vuint8mf4x6_t): Ditto.
        (vint8mf4x7_t): Ditto.
        (vuint8mf4x7_t): Ditto.
        (vint8mf4x8_t): Ditto.
        (vuint8mf4x8_t): Ditto.
        (vint8mf2x2_t): Ditto.
        (vuint8mf2x2_t): Ditto.
        (vint8mf2x3_t): Ditto.
        (vuint8mf2x3_t): Ditto.
        (vint8mf2x4_t): Ditto.
        (vuint8mf2x4_t): Ditto.
        (vint8mf2x5_t): Ditto.
        (vuint8mf2x5_t): Ditto.
        (vint8mf2x6_t): Ditto.
        (vuint8mf2x6_t): Ditto.
        (vint8mf2x7_t): Ditto.
        (vuint8mf2x7_t): Ditto.
        (vint8mf2x8_t): Ditto.
        (vuint8mf2x8_t): Ditto.
        (vint8m1x2_t): Ditto.
        (vuint8m1x2_t): Ditto.
        (vint8m1x3_t): Ditto.
        (vuint8m1x3_t): Ditto.
        (vint8m1x4_t): Ditto.
        (vuint8m1x4_t): Ditto.
        (vint8m1x5_t): Ditto.
        (vuint8m1x5_t): Ditto.
        (vint8m1x6_t): Ditto.
        (vuint8m1x6_t): Ditto.
        (vint8m1x7_t): Ditto.
        (vuint8m1x7_t): Ditto.
        (vint8m1x8_t): Ditto.
        (vuint8m1x8_t): Ditto.
        (vint8m2x2_t): Ditto.
        (vuint8m2x2_t): Ditto.
        (vint8m2x3_t): Ditto.
        (vuint8m2x3_t): Ditto.
        (vint8m2x4_t): Ditto.
        (vuint8m2x4_t): Ditto.
        (vint8m4x2_t): Ditto.
        (vuint8m4x2_t): Ditto.
        (vint16mf4x2_t): Ditto.
        (vuint16mf4x2_t): Ditto.
        (vint16mf4x3_t): Ditto.
        (vuint16mf4x3_t): Ditto.
        (vint16mf4x4_t): Ditto.
        (vuint16mf4x4_t): Ditto.
        (vint16mf4x5_t): Ditto.
        (vuint16mf4x5_t): Ditto.
        (vint16mf4x6_t): Ditto.
        (vuint16mf4x6_t): Ditto.
        (vint16mf4x7_t): Ditto.
        (vuint16mf4x7_t): Ditto.
        (vint16mf4x8_t): Ditto.
        (vuint16mf4x8_t): Ditto.
        (vint16mf2x2_t): Ditto.
        (vuint16mf2x2_t): Ditto.
        (vint16mf2x3_t): Ditto.
        (vuint16mf2x3_t): Ditto.
        (vint16mf2x4_t): Ditto.
        (vuint16mf2x4_t): Ditto.
        (vint16mf2x5_t): Ditto.
        (vuint16mf2x5_t): Ditto.
        (vint16mf2x6_t): Ditto.
        (vuint16mf2x6_t): Ditto.
        (vint16mf2x7_t): Ditto.
        (vuint16mf2x7_t): Ditto.
        (vint16mf2x8_t): Ditto.
        (vuint16mf2x8_t): Ditto.
        (vint16m1x2_t): Ditto.
        (vuint16m1x2_t): Ditto.
        (vint16m1x3_t): Ditto.
        (vuint16m1x3_t): Ditto.
        (vint16m1x4_t): Ditto.
        (vuint16m1x4_t): Ditto.
        (vint16m1x5_t): Ditto.
        (vuint16m1x5_t): Ditto.
        (vint16m1x6_t): Ditto.
        (vuint16m1x6_t): Ditto.
        (vint16m1x7_t): Ditto.
        (vuint16m1x7_t): Ditto.
        (vint16m1x8_t): Ditto.
        (vuint16m1x8_t): Ditto.
        (vint16m2x2_t): Ditto.
        (vuint16m2x2_t): Ditto.
        (vint16m2x3_t): Ditto.
        (vuint16m2x3_t): Ditto.
        (vint16m2x4_t): Ditto.
        (vuint16m2x4_t): Ditto.
        (vint16m4x2_t): Ditto.
        (vuint16m4x2_t): Ditto.
        (vint32mf2x2_t): Ditto.
        (vuint32mf2x2_t): Ditto.
        (vint32mf2x3_t): Ditto.
        (vuint32mf2x3_t): Ditto.
        (vint32mf2x4_t): Ditto.
        (vuint32mf2x4_t): Ditto.
        (vint32mf2x5_t): Ditto.
        (vuint32mf2x5_t): Ditto.
        (vint32mf2x6_t): Ditto.
        (vuint32mf2x6_t): Ditto.
        (vint32mf2x7_t): Ditto.
        (vuint32mf2x7_t): Ditto.
        (vint32mf2x8_t): Ditto.
        (vuint32mf2x8_t): Ditto.
        (vint32m1x2_t): Ditto.
        (vuint32m1x2_t): Ditto.
        (vint32m1x3_t): Ditto.
        (vuint32m1x3_t): Ditto.
        (vint32m1x4_t): Ditto.
        (vuint32m1x4_t): Ditto.
        (vint32m1x5_t): Ditto.
        (vuint32m1x5_t): Ditto.
        (vint32m1x6_t): Ditto.
        (vuint32m1x6_t): Ditto.
        (vint32m1x7_t): Ditto.
        (vuint32m1x7_t): Ditto.
        (vint32m1x8_t): Ditto.
        (vuint32m1x8_t): Ditto.
        (vint32m2x2_t): Ditto.
        (vuint32m2x2_t): Ditto.
        (vint32m2x3_t): Ditto.
        (vuint32m2x3_t): Ditto.
        (vint32m2x4_t): Ditto.
        (vuint32m2x4_t): Ditto.
        (vint32m4x2_t): Ditto.
        (vuint32m4x2_t): Ditto.
        (vint64m1x2_t): Ditto.
        (vuint64m1x2_t): Ditto.
        (vint64m1x3_t): Ditto.
        (vuint64m1x3_t): Ditto.
        (vint64m1x4_t): Ditto.
        (vuint64m1x4_t): Ditto.
        (vint64m1x5_t): Ditto.
        (vuint64m1x5_t): Ditto.
        (vint64m1x6_t): Ditto.
        (vuint64m1x6_t): Ditto.
        (vint64m1x7_t): Ditto.
        (vuint64m1x7_t): Ditto.
        (vint64m1x8_t): Ditto.
        (vuint64m1x8_t): Ditto.
        (vint64m2x2_t): Ditto.
        (vuint64m2x2_t): Ditto.
        (vint64m2x3_t): Ditto.
        (vuint64m2x3_t): Ditto.
        (vint64m2x4_t): Ditto.
        (vuint64m2x4_t): Ditto.
        (vint64m4x2_t): Ditto.
        (vuint64m4x2_t): Ditto.
        (vfloat32mf2x2_t): Ditto.
        (vfloat32mf2x3_t): Ditto.
        (vfloat32mf2x4_t): Ditto.
        (vfloat32mf2x5_t): Ditto.
        (vfloat32mf2x6_t): Ditto.
        (vfloat32mf2x7_t): Ditto.
        (vfloat32mf2x8_t): Ditto.
        (vfloat32m1x2_t): Ditto.
        (vfloat32m1x3_t): Ditto.
        (vfloat32m1x4_t): Ditto.
        (vfloat32m1x5_t): Ditto.
        (vfloat32m1x6_t): Ditto.
        (vfloat32m1x7_t): Ditto.
        (vfloat32m1x8_t): Ditto.
        (vfloat32m2x2_t): Ditto.
        (vfloat32m2x3_t): Ditto.
        (vfloat32m2x4_t): Ditto.
        (vfloat32m4x2_t): Ditto.
        (vfloat64m1x2_t): Ditto.
        (vfloat64m1x3_t): Ditto.
        (vfloat64m1x4_t): Ditto.
        (vfloat64m1x5_t): Ditto.
        (vfloat64m1x6_t): Ditto.
        (vfloat64m1x7_t): Ditto.
        (vfloat64m1x8_t): Ditto.
        (vfloat64m2x2_t): Ditto.
        (vfloat64m2x3_t): Ditto.
        (vfloat64m2x4_t): Ditto.
        (vfloat64m4x2_t): Ditto.
        * config/riscv/riscv-vector-builtins.h (DEF_RVV_TUPLE_TYPE): Ditto.
        * config/riscv/riscv-vector-switch.def (TUPLE_ENTRY): Ditto.
        * config/riscv/riscv.cc (riscv_v_ext_tuple_mode_p): New function.
        (TUPLE_ENTRY): Ditto.
        (riscv_v_ext_mode_p): New function.
        (riscv_v_adjust_nunits): Add tuple mode adjustment.
        (riscv_classify_address): Ditto.
        (riscv_binary_cost): Ditto.
        (riscv_rtx_costs): Ditto.
        (riscv_secondary_memory_needed): Ditto.
        (riscv_hard_regno_nregs): Ditto.
        (riscv_hard_regno_mode_ok): Ditto.
        (riscv_vector_mode_supported_p): Ditto.
        (riscv_regmode_natural_size): Ditto.
        (riscv_array_mode): New function.
        (TARGET_ARRAY_MODE): New target hook.
        * config/riscv/riscv.md: Add tuple modes.
        * config/riscv/vector-iterators.md: Ditto.
        * config/riscv/vector.md (mov<mode>): Add data movement for tuple modes.
        (*mov<VT:mode>_<P:mode>): Ditto.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/base/abi-10.c: New test.
        * gcc.target/riscv/rvv/base/abi-11.c: New test.
        * gcc.target/riscv/rvv/base/abi-12.c: New test.
        * gcc.target/riscv/rvv/base/abi-13.c: New test.
        * gcc.target/riscv/rvv/base/abi-14.c: New test.
        * gcc.target/riscv/rvv/base/abi-15.c: New test.
        * gcc.target/riscv/rvv/base/abi-16.c: New test.
        * gcc.target/riscv/rvv/base/abi-8.c: New test.
        * gcc.target/riscv/rvv/base/abi-9.c: New test.
        * gcc.target/riscv/rvv/base/tuple-1.c: New test.
        * gcc.target/riscv/rvv/base/tuple-10.c: New test.
        * gcc.target/riscv/rvv/base/tuple-11.c: New test.
        * gcc.target/riscv/rvv/base/tuple-12.c: New test.
        * gcc.target/riscv/rvv/base/tuple-13.c: New test.
        * gcc.target/riscv/rvv/base/tuple-14.c: New test.
        * gcc.target/riscv/rvv/base/tuple-15.c: New test.
        * gcc.target/riscv/rvv/base/tuple-16.c: New test.
        * gcc.target/riscv/rvv/base/tuple-17.c: New test.
        * gcc.target/riscv/rvv/base/tuple-18.c: New test.
        * gcc.target/riscv/rvv/base/tuple-19.c: New test.
        * gcc.target/riscv/rvv/base/tuple-2.c: New test.
        * gcc.target/riscv/rvv/base/tuple-20.c: New test.
        * gcc.target/riscv/rvv/base/tuple-21.c: New test.
        * gcc.target/riscv/rvv/base/tuple-22.c: New test.
        * gcc.target/riscv/rvv/base/tuple-23.c: New test.
        * gcc.target/riscv/rvv/base/tuple-24.c: New test.
        * gcc.target/riscv/rvv/base/tuple-25.c: New test.
        * gcc.target/riscv/rvv/base/tuple-26.c: New test.
        * gcc.target/riscv/rvv/base/tuple-27.c: New test.
        * gcc.target/riscv/rvv/base/tuple-3.c: New test.
        * gcc.target/riscv/rvv/base/tuple-4.c: New test.
        * gcc.target/riscv/rvv/base/tuple-5.c: New test.
        * gcc.target/riscv/rvv/base/tuple-6.c: New test.
        * gcc.target/riscv/rvv/base/tuple-7.c: New test.
        * gcc.target/riscv/rvv/base/tuple-8.c: New test.
        * gcc.target/riscv/rvv/base/tuple-9.c: New test.
        * gcc.target/riscv/rvv/base/user-10.c: New test.
        * gcc.target/riscv/rvv/base/user-11.c: New test.
        * gcc.target/riscv/rvv/base/user-12.c: New test.
        * gcc.target/riscv/rvv/base/user-13.c: New test.
        * gcc.target/riscv/rvv/base/user-14.c: New test.
        * gcc.target/riscv/rvv/base/user-15.c: New test.
        * gcc.target/riscv/rvv/base/user-7.c: New test.
        * gcc.target/riscv/rvv/base/user-8.c: New test.
        * gcc.target/riscv/rvv/base/user-9.c: New test.
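
A minimal usage sketch of the new types (a hypothetical example modeled
on the new tuple-*.c tests; the segment load/store intrinsics themselves
are not part of this patch):

    #include "riscv_vector.h"

    /* Whole-tuple data movement: the mov<mode> pattern expands this via
       expand_tuple_move into NF = 2 subpart moves of the vint32m1_t
       parts.  */
    void
    copy_i32m1x2 (void *in, void *out)
    {
      vint32m1x2_t v = *(vint32m1x2_t *) in;
      *(vint32m1x2_t *) out = v;
    }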

---
 gcc/config/riscv/riscv-modes.def              | 133 ++++++++++
 gcc/config/riscv/riscv-protos.h               |   5 +
 gcc/config/riscv/riscv-v.cc                   | 188 +++++++++++++-
 gcc/config/riscv/riscv-vector-builtins.cc     |  78 ++++++
 gcc/config/riscv/riscv-vector-builtins.def    | 237 ++++++++++++++++++
 gcc/config/riscv/riscv-vector-builtins.h      |   1 +
 gcc/config/riscv/riscv-vector-switch.def      | 176 +++++++++++++
 gcc/config/riscv/riscv.cc                     | 101 ++++++--
 gcc/config/riscv/riscv.md                     |  27 +-
 gcc/config/riscv/vector-iterators.md          | 186 ++++++++++++++
 gcc/config/riscv/vector.md                    |  44 ++++
 .../gcc.target/riscv/rvv/base/abi-10.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-11.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-12.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-13.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-14.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-15.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-16.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-8.c         | 205 +++++++++++++++
 .../gcc.target/riscv/rvv/base/abi-9.c         | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/tuple-1.c       | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-10.c      |  51 ++++
 .../gcc.target/riscv/rvv/base/tuple-11.c      |  23 ++
 .../gcc.target/riscv/rvv/base/tuple-12.c      | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-13.c      | 107 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-14.c      |  51 ++++
 .../gcc.target/riscv/rvv/base/tuple-15.c      |  23 ++
 .../gcc.target/riscv/rvv/base/tuple-16.c      | 107 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-17.c      |  51 ++++
 .../gcc.target/riscv/rvv/base/tuple-18.c      |  23 ++
 .../gcc.target/riscv/rvv/base/tuple-19.c      |  59 +++++
 .../gcc.target/riscv/rvv/base/tuple-2.c       | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-20.c      |  58 +++++
 .../gcc.target/riscv/rvv/base/tuple-21.c      |  30 +++
 .../gcc.target/riscv/rvv/base/tuple-22.c      |  16 ++
 .../gcc.target/riscv/rvv/base/tuple-23.c      |  58 +++++
 .../gcc.target/riscv/rvv/base/tuple-24.c      |  30 +++
 .../gcc.target/riscv/rvv/base/tuple-25.c      |  16 ++
 .../gcc.target/riscv/rvv/base/tuple-26.c      |  34 +++
 .../gcc.target/riscv/rvv/base/tuple-27.c      |  29 +++
 .../gcc.target/riscv/rvv/base/tuple-3.c       | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-4.c       | 107 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-5.c       |  51 ++++
 .../gcc.target/riscv/rvv/base/tuple-6.c       |  23 ++
 .../gcc.target/riscv/rvv/base/tuple-7.c       | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-8.c       | 108 ++++++++
 .../gcc.target/riscv/rvv/base/tuple-9.c       | 107 ++++++++
 .../gcc.target/riscv/rvv/base/user-10.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-11.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-12.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-13.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-14.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-15.c       | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-7.c        | 204 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-8.c        | 206 +++++++++++++++
 .../gcc.target/riscv/rvv/base/user-9.c        | 206 +++++++++++++++
 56 files changed, 6548 insertions(+), 19 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/abi-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-16.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-17.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-18.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-19.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-20.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-21.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-22.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-23.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-24.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-25.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-26.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-27.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-5.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-6.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/tuple-9.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-10.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-11.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-12.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-13.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-14.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-15.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/user-9.c

diff --git a/gcc/config/riscv/riscv-modes.def b/gcc/config/riscv/riscv-modes.def
index b1669609eec..19a4f9fb3db 100644
--- a/gcc/config/riscv/riscv-modes.def
+++ b/gcc/config/riscv/riscv-modes.def
@@ -185,6 +185,139 @@ VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
 ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
 ADJUST_ALIGNMENT (VNx1QI, 1);
 
+/* Tuple modes for segment loads/stores.  The NF value ranges from 2 to 8.  */
+
+/*
+   | Mode           | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
+   |                | LMUL        | SEW/LMUL    | LMUL        | SEW/LMUL    | LMUL         | SEW/LMUL     |
+   | VNxNFx1QI      | MF4         | 32          | MF8         | 64          | N/A          | N/A          |
+   | VNxNFx2QI      | MF2         | 16          | MF4         | 32          | MF8          | 64           |
+   | VNxNFx4QI      | M1          | 8           | MF2         | 16          | MF4          | 32           |
+   | VNxNFx8QI      | M2          | 4           | M1          | 8           | MF2          | 16           |
+   | VNxNFx16QI     | M4          | 2           | M2          | 4           | M1           | 8            |
+   | VNxNFx32QI     | M8          | 1           | M4          | 2           | M2           | 4            |
+   | VNxNFx64QI     | N/A         | N/A         | M8          | 1           | M4           | 2            |
+   | VNxNFx128QI    | N/A         | N/A         | N/A         | N/A         | M8           | 1            |
+   | VNxNFx1(HI|HF) | MF2         | 32          | MF4         | 64          | N/A          | N/A          |
+   | VNxNFx2(HI|HF) | M1          | 16          | MF2         | 32          | MF4          | 64           |
+   | VNxNFx4(HI|HF) | M2          | 8           | M1          | 16          | MF2          | 32           |
+   | VNxNFx8(HI|HF) | M4          | 4           | M2          | 8           | M1           | 16           |
+   | VNxNFx16(HI|HF)| M8          | 2           | M4          | 4           | M2           | 8            |
+   | VNxNFx32(HI|HF)| N/A         | N/A         | M8          | 2           | M4           | 4            |
+   | VNxNFx64(HI|HF)| N/A         | N/A         | N/A         | N/A         | M8           | 2            |
+   | VNxNFx1(SI|SF) | M1          | 32          | MF2         | 64          | MF2          | 64           |
+   | VNxNFx2(SI|SF) | M2          | 16          | M1          | 32          | M1           | 32           |
+   | VNxNFx4(SI|SF) | M4          | 8           | M2          | 16          | M2           | 16           |
+   | VNxNFx8(SI|SF) | M8          | 4           | M4          | 8           | M4           | 8            |
+   | VNxNFx16(SI|SF)| N/A         | N/A         | M8          | 4           | M8           | 4            |
+   | VNxNFx1(DI|DF) | N/A         | N/A         | M1          | 64          | N/A          | N/A          |
+   | VNxNFx2(DI|DF) | N/A         | N/A         | M2          | 32          | M1           | 64           |
+   | VNxNFx4(DI|DF) | N/A         | N/A         | M4          | 16          | M2           | 32           |
+   | VNxNFx8(DI|DF) | N/A         | N/A         | M8          | 8           | M4           | 16           |
+   | VNxNFx16(DI|DF)| N/A         | N/A         | N/A         | N/A         | M8           | 8            |
+*/
+
+#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD)                     \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1);             \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1);       \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1);       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode,       \
+					VB * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode,       \
+					VH * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode,       \
+					VS * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode,       \
+					VD * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode,       \
+					VS * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode,       \
+					VD * NSUBPARTS));                      \
+                                                                               \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
+
+RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
+
+RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
+
+RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
+RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
+RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
+
+RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
+
+#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS)                                     \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1);                \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1);                  \
+                                                                               \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1HI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1SI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1SF,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x2QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode,            \
+					2 * NSUBPARTS));                       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x2HI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode,            \
+					2 * NSUBPARTS));                       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x4QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode,            \
+					4 * NSUBPARTS));                       \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
+
+RVV_TUPLE_PARTIAL_MODES (2)
+RVV_TUPLE_PARTIAL_MODES (3)
+RVV_TUPLE_PARTIAL_MODES (4)
+RVV_TUPLE_PARTIAL_MODES (5)
+RVV_TUPLE_PARTIAL_MODES (6)
+RVV_TUPLE_PARTIAL_MODES (7)
+RVV_TUPLE_PARTIAL_MODES (8)
+
 /* TODO: According to RISC-V 'V' ISA spec, the maximum vector length can
    be 65536 for a single vector register which means the vector mode in
    GCC can be maximum = 65536 * 8 bits (LMUL=8).
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 5244e8dcbf0..96ab8dd3629 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -78,6 +78,7 @@ extern bool riscv_gpr_save_operation_p (rtx);
 extern void riscv_reinit (void);
 extern poly_uint64 riscv_regmode_natural_size (machine_mode);
 extern bool riscv_v_ext_vector_mode_p (machine_mode);
+extern bool riscv_v_ext_tuple_mode_p (machine_mode);
 extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
 
 /* Routines implemented in riscv-c.cc.  */
@@ -165,6 +166,8 @@ void emit_vlmax_op (unsigned, rtx, rtx, rtx, machine_mode);
 void emit_nonvlmax_op (unsigned, rtx, rtx, rtx, machine_mode);
 enum vlmul_type get_vlmul (machine_mode);
 unsigned int get_ratio (machine_mode);
+unsigned int get_nf (machine_mode);
+machine_mode get_subpart_mode (machine_mode);
 int get_ta (rtx);
 int get_ma (rtx);
 int get_avl_type (rtx);
@@ -186,6 +189,7 @@ enum tail_policy get_prefer_tail_policy ();
 enum mask_policy get_prefer_mask_policy ();
 rtx get_avl_type_rtx (enum avl_type);
 opt_machine_mode get_vector_mode (scalar_mode, poly_uint64);
+opt_machine_mode get_tuple_mode (machine_mode, unsigned int);
 bool simm5_p (rtx);
 bool neg_simm5_p (rtx);
 #ifdef RTX_CODE
@@ -207,6 +211,7 @@ enum vlen_enum
 bool slide1_sew64_helper (int, machine_mode, machine_mode,
 			  machine_mode, rtx *);
 rtx gen_avl_for_scalar_move (rtx);
+void expand_tuple_move (machine_mode, rtx *);
 }
 
 /* We classify builtin types into two classes:
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 99c414cc910..3950aa80338 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -342,17 +342,32 @@ struct mode_vtype_group
   uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
   enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
   uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
+  machine_mode subpart_mode[NUM_MACHINE_MODES];
+  uint8_t nf[NUM_MACHINE_MODES];
   mode_vtype_group ()
   {
 #define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32,   \
 	      VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64,                      \
-	      VLMUL_FOR_FOR_VLEN128, RATIO_FOR_FOR_VLEN128)                    \
+	      VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)                    \
   vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;                     \
   ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;                     \
   vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;                     \
   ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;                     \
-  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_FOR_VLEN128;                   \
-  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_FOR_VLEN128;
+  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;                   \
+  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
+#include "riscv-vector-switch.def"
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
+		    RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64,                \
+		    RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128,               \
+		    RATIO_FOR_MIN_VLEN128)                                     \
+  subpart_mode[MODE##mode] = SUBPART_MODE##mode;                               \
+  nf[MODE##mode] = NF;                                                         \
+  vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;                     \
+  ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;                     \
+  vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;                     \
+  ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;                     \
+  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;                   \
+  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
 #include "riscv-vector-switch.def"
   }
 };
@@ -371,6 +386,26 @@ get_vlmul (machine_mode mode)
     return mode_vtype_infos.vlmul_for_min_vlen64[mode];
 }
 
+/* Return the NF value of the corresponding mode.  */
+unsigned int
+get_nf (machine_mode mode)
+{
+  /* We don't allow non-tuple modes to go through this function.  */
+  gcc_assert (riscv_v_ext_tuple_mode_p (mode));
+  return mode_vtype_infos.nf[mode];
+}
+
+/* Return the subpart mode of the tuple mode.  For VNx2x1SImode,
+   for example, the subpart mode is VNx1SImode.  This helps to build
+   the array/struct types in the builtins.  */
+machine_mode
+get_subpart_mode (machine_mode mode)
+{
+  /* We don't allow non-tuple modes to go through this function.  */
+  gcc_assert (riscv_v_ext_tuple_mode_p (mode));
+  return mode_vtype_infos.subpart_mode[mode];
+}
+
 /* Get ratio according to machine mode.  */
 unsigned int
 get_ratio (machine_mode mode)
@@ -452,6 +487,24 @@ get_vector_mode (scalar_mode inner_mode, poly_uint64 nunits)
   return opt_machine_mode ();
 }
 
+/* Return the RVV tuple mode if we can find the legal tuple mode for the
+   corresponding subpart mode and NF.  */
+opt_machine_mode
+get_tuple_mode (machine_mode subpart_mode, unsigned int nf)
+{
+  poly_uint64 nunits = GET_MODE_NUNITS (subpart_mode) * nf;
+  scalar_mode inner_mode = GET_MODE_INNER (subpart_mode);
+  enum mode_class mclass = GET_MODE_CLASS (subpart_mode);
+  machine_mode mode;
+  FOR_EACH_MODE_IN_CLASS (mode, mclass)
+    if (inner_mode == GET_MODE_INNER (mode)
+	&& known_eq (nunits, GET_MODE_NUNITS (mode))
+	&& riscv_v_ext_tuple_mode_p (mode)
+	&& get_subpart_mode (mode) == subpart_mode)
+      return mode;
+  return opt_machine_mode ();
+}
+
 bool
 simm5_p (rtx x)
 {
@@ -742,4 +795,133 @@ gen_avl_for_scalar_move (rtx avl)
     }
 }
 
+/* Expand data movement for tuple modes.  */
+void
+expand_tuple_move (machine_mode mask_mode, rtx *ops)
+{
+  unsigned int i;
+  machine_mode tuple_mode = GET_MODE (ops[0]);
+  machine_mode subpart_mode = get_subpart_mode (tuple_mode);
+  poly_int64 subpart_size = GET_MODE_SIZE (subpart_mode);
+  unsigned int nf = get_nf (tuple_mode);
+  bool fractional_p = known_lt (subpart_size, BYTES_PER_RISCV_VECTOR);
+
+  if (REG_P (ops[0]) && CONST_VECTOR_P (ops[1]))
+    {
+      rtx val;
+      gcc_assert (can_create_pseudo_p ()
+		  && const_vec_duplicate_p (ops[1], &val));
+      for (i = 0; i < nf; ++i)
+	{
+	  poly_int64 offset = i * subpart_size;
+	  rtx subreg
+	    = simplify_gen_subreg (subpart_mode, ops[0], tuple_mode, offset);
+	  rtx dup = gen_const_vec_duplicate (subpart_mode, val);
+	  emit_move_insn (subreg, dup);
+	}
+    }
+  else if (REG_P (ops[0]) && REG_P (ops[1]))
+    {
+      for (i = 0; i < nf; ++i)
+	{
+	  int index = i;
+
+	  /* Take NF = 2 and LMUL = 1 for example:
+
+	      - move v8 to v9:
+		 vmv1r v10,v9
+		 vmv1r v9,v8
+
+	      - move v8 to v7:
+		 vmv1r v7,v8
+		 vmv1r v8,v9  */
+	  if (REGNO (ops[0]) > REGNO (ops[1]))
+	    index = nf - 1 - i;
+	  poly_int64 offset = index * subpart_size;
+	  rtx dst_subreg
+	    = simplify_gen_subreg (subpart_mode, ops[0], tuple_mode, offset);
+	  rtx src_subreg
+	    = simplify_gen_subreg (subpart_mode, ops[1], tuple_mode, offset);
+	  emit_insn (gen_rtx_SET (dst_subreg, src_subreg));
+	}
+    }
+  else
+    {
+      /* Expand tuple memory data movement.  */
+      gcc_assert (MEM_P (ops[0]) || MEM_P (ops[1]));
+      rtx offset = gen_int_mode (subpart_size, Pmode);
+      if (!subpart_size.is_constant ())
+	{
+	  emit_move_insn (ops[2], gen_int_mode (BYTES_PER_RISCV_VECTOR, Pmode));
+	  if (fractional_p)
+	    {
+	      unsigned int factor
+		= exact_div (BYTES_PER_RISCV_VECTOR, subpart_size)
+		    .to_constant ();
+	      rtx pat
+		= gen_rtx_ASHIFTRT (Pmode, ops[2],
+				    gen_int_mode (exact_log2 (factor), Pmode));
+	      emit_insn (gen_rtx_SET (ops[2], pat));
+	    }
+
+	  if (known_gt (subpart_size, BYTES_PER_RISCV_VECTOR))
+	    {
+	      unsigned int factor
+		= exact_div (subpart_size, BYTES_PER_RISCV_VECTOR)
+		    .to_constant ();
+	      rtx pat
+		= gen_rtx_ASHIFT (Pmode, ops[2],
+				  gen_int_mode (exact_log2 (factor), Pmode));
+	      emit_insn (gen_rtx_SET (ops[2], pat));
+	    }
+	  offset = ops[2];
+	}
+
+      if (MEM_P (ops[1]))
+	{
+	  /* Load operations.  */
+	  emit_move_insn (ops[3], XEXP (ops[1], 0));
+	  for (i = 0; i < nf; i++)
+	    {
+	      rtx subreg = simplify_gen_subreg (subpart_mode, ops[0],
+						tuple_mode, i * subpart_size);
+	      if (i != 0)
+		{
+		  rtx new_addr = gen_rtx_PLUS (Pmode, ops[3], offset);
+		  emit_insn (gen_rtx_SET (ops[3], new_addr));
+		}
+	      rtx mem = gen_rtx_MEM (subpart_mode, ops[3]);
+
+	      if (fractional_p)
+		emit_vlmax_op (code_for_pred_mov (subpart_mode), subreg, mem,
+			       ops[4], mask_mode);
+	      else
+		emit_move_insn (subreg, mem);
+	    }
+	}
+      else
+	{
+	  /* Store operations.  */
+	  emit_move_insn (ops[3], XEXP (ops[0], 0));
+	  for (i = 0; i < nf; i++)
+	    {
+	      rtx subreg = simplify_gen_subreg (subpart_mode, ops[1],
+						tuple_mode, i * subpart_size);
+	      if (i != 0)
+		{
+		  rtx new_addr = gen_rtx_PLUS (Pmode, ops[3], offset);
+		  emit_insn (gen_rtx_SET (ops[3], new_addr));
+		}
+	      rtx mem = gen_rtx_MEM (subpart_mode, ops[3]);
+
+	      if (fractional_p)
+		emit_vlmax_op (code_for_pred_mov (subpart_mode), mem, subreg,
+			       ops[4], mask_mode);
+	      else
+		emit_move_insn (mem, subreg);
+	    }
+	}
+    }
+}
+
 } // namespace riscv_vector
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 434bd8e157b..3cfa9c90181 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -95,6 +95,8 @@ struct registered_function_hasher : nofree_ptr_hash<registered_function>
 static CONSTEXPR const vector_type_info vector_types[] = {
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, ARGS...)                          \
   {#NAME, #ABI_NAME, "u" #NCHARS #ABI_NAME},
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, ARGS...)                    \
+  {#NAME, #ABI_NAME, "u" #NCHARS #ABI_NAME},
 #include "riscv-vector-builtins.def"
 };
 
@@ -112,6 +114,9 @@ const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
 		     VSETVL_SUFFIX)                                            \
   {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)                                  \
+  {#VECTOR_SUFFIX, "", ""},
 #include "riscv-vector-builtins.def"
 };
 
@@ -2336,6 +2341,75 @@ register_builtin_type (vector_type_index type, tree eltype, machine_mode mode)
   lang_hooks.types.register_builtin_type (vectype, vector_types[type].abi_name);
 }
 
+/* Register tuple type TYPE, which contains NF vectors of type SUBPART_TYPE.  */
+static void
+register_tuple_type (vector_type_index type, vector_type_index subpart_type,
+		     tree eltype, unsigned int nf)
+{
+  /* TODO: We currently just skip registering illegal RVV types.
+     Ideally, we should report a friendlier error message instead of
+     reporting an "unknown" type.  Support such error messages in
+     the future.  */
+  if (!abi_vector_types[subpart_type])
+    return;
+  tree tuple_type = lang_hooks.types.make_type (RECORD_TYPE);
+
+  /* The contents of the type are opaque, so we can define them in any
+     way that maps to the correct ABI type.
+
+     Here we choose to use the same layout as for riscv_vector.h, with
+     "__val":
+
+	struct vfooxN_t { vfoo_t __val[N]; };
+
+     (It wouldn't be possible to write that directly in C or C++ for
+     sizeless types, but that's not a problem for this function.)
+
+     Using arrays simplifies the handling of vget and vset for variable
+     arguments.  */
+  tree array_type = build_array_type_nelts (abi_vector_types[subpart_type], nf);
+  gcc_assert (array_type);
+  gcc_assert (VECTOR_MODE_P (TYPE_MODE (array_type))
+	      && TYPE_MODE_RAW (array_type) == TYPE_MODE (array_type));
+
+  tree field = build_decl (input_location, FIELD_DECL, get_identifier ("__val"),
+			   array_type);
+  DECL_FIELD_CONTEXT (field) = tuple_type;
+  TYPE_FIELDS (tuple_type) = field;
+  add_vector_type_attribute (tuple_type, vector_types[type].mangled_name);
+  make_type_sizeless (tuple_type);
+  layout_type (tuple_type);
+  gcc_assert (VECTOR_MODE_P (TYPE_MODE (tuple_type))
+	      && TYPE_MODE_RAW (tuple_type) == TYPE_MODE (tuple_type));
+
+  tree decl
+    = build_decl (input_location, TYPE_DECL,
+		  get_identifier (vector_types[type].abi_name), tuple_type);
+  TYPE_NAME (tuple_type) = decl;
+  TYPE_STUB_DECL (tuple_type) = decl;
+  lang_hooks.decls.pushdecl (decl);
+  /* ??? Undo the effect of set_underlying_type for C.  The C frontend
+     doesn't recognize DECL as a built-in because (as intended) the decl has
+     a real location instead of BUILTINS_LOCATION.  The frontend therefore
+     treats the decl like a normal C "typedef struct foo foo;", expecting
+     the type for tag "struct foo" to have a dummy unnamed TYPE_DECL instead
+     of the named one we attached above.  It then sets DECL_ORIGINAL_TYPE
+     on the supposedly unnamed decl, creating a circularity that upsets
+     dwarf2out.
+
+     We don't want to follow the normal C model and create "struct foo"
+     tags for tuple types since (a) the types are supposed to be opaque
+     and (b) they couldn't be defined as a real struct anyway.  Treating
+     the TYPE_DECLs as "typedef struct foo foo;" without creating
+     "struct foo" would lead to confusing error messages.  */
+  DECL_ORIGINAL_TYPE (decl) = NULL_TREE;
+
+  builtin_types[type].scalar = eltype;
+  builtin_types[type].scalar_ptr = build_pointer_type (eltype);
+  builtin_types[type].scalar_const_ptr = build_const_pointer (eltype);
+  abi_vector_types[type] = tuple_type;
+}
+
 /* Register the built-in RVV ABI types, such as __rvv_int32m1_t.  */
 static void
 register_builtin_types ()
@@ -2358,6 +2432,10 @@ register_builtin_types ()
 	 : TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode               \
 				 : VECTOR_MODE_MIN_VLEN_32##mode;              \
   register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node, mode);
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)                                  \
+  register_tuple_type (VECTOR_TYPE_##NAME, VECTOR_TYPE_##SUBPART_TYPE,         \
+		       SCALAR_TYPE##_type_node, NF);
 #include "riscv-vector-builtins.def"
 }
 
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 64c09b5d8cb..b0d6edda1b6 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -48,6 +48,11 @@ along with GCC; see the file COPYING3.  If not see
 		     VSETVL_SUFFIX)
 #endif
 
+#ifndef DEF_RVV_TUPLE_TYPE
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)
+#endif
+
 /* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
    The 'NAME' will be concatenated into intrinsic function name.  */
 #ifndef DEF_RVV_OP_TYPE
@@ -323,6 +328,237 @@ DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID,
 DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
 	      _f64, _e64m8)
 
+/* Define tuple types for segment loads/stores.  Each tuple type follows the
+   naming scheme vint<SEW><LMUL>x<NF>_t, and LMUL * NF <= 8 always holds.  */
+/* Define tuple types for SEW = 8, LMUL = MF8.  */
+DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t, int8, 2, _i8mf8x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t, uint8, 2, _u8mf8x2)
+DEF_RVV_TUPLE_TYPE (vint8mf8x3_t, 17, __rvv_int8mf8x3_t, vint8mf8_t, int8, 3, _i8mf8x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x3_t, 18, __rvv_uint8mf8x3_t, vuint8mf8_t, uint8, 3, _u8mf8x3)
+DEF_RVV_TUPLE_TYPE (vint8mf8x4_t, 17, __rvv_int8mf8x4_t, vint8mf8_t, int8, 4, _i8mf8x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x4_t, 18, __rvv_uint8mf8x4_t, vuint8mf8_t, uint8, 4, _u8mf8x4)
+DEF_RVV_TUPLE_TYPE (vint8mf8x5_t, 17, __rvv_int8mf8x5_t, vint8mf8_t, int8, 5, _i8mf8x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x5_t, 18, __rvv_uint8mf8x5_t, vuint8mf8_t, uint8, 5, _u8mf8x5)
+DEF_RVV_TUPLE_TYPE (vint8mf8x6_t, 17, __rvv_int8mf8x6_t, vint8mf8_t, int8, 6, _i8mf8x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x6_t, 18, __rvv_uint8mf8x6_t, vuint8mf8_t, uint8, 6, _u8mf8x6)
+DEF_RVV_TUPLE_TYPE (vint8mf8x7_t, 17, __rvv_int8mf8x7_t, vint8mf8_t, int8, 7, _i8mf8x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18, __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7, _u8mf8x7)
+DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t, int8, 8, _i8mf8x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t, uint8, 8, _u8mf8x8)
+/* Define tuple types for SEW = 8, LMUL = MF4.  */
+DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t, int8, 2, _i8mf4x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t, uint8, 2, _u8mf4x2)
+DEF_RVV_TUPLE_TYPE (vint8mf4x3_t, 17, __rvv_int8mf4x3_t, vint8mf4_t, int8, 3, _i8mf4x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x3_t, 18, __rvv_uint8mf4x3_t, vuint8mf4_t, uint8, 3, _u8mf4x3)
+DEF_RVV_TUPLE_TYPE (vint8mf4x4_t, 17, __rvv_int8mf4x4_t, vint8mf4_t, int8, 4, _i8mf4x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x4_t, 18, __rvv_uint8mf4x4_t, vuint8mf4_t, uint8, 4, _u8mf4x4)
+DEF_RVV_TUPLE_TYPE (vint8mf4x5_t, 17, __rvv_int8mf4x5_t, vint8mf4_t, int8, 5, _i8mf4x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x5_t, 18, __rvv_uint8mf4x5_t, vuint8mf4_t, uint8, 5, _u8mf4x5)
+DEF_RVV_TUPLE_TYPE (vint8mf4x6_t, 17, __rvv_int8mf4x6_t, vint8mf4_t, int8, 6, _i8mf4x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x6_t, 18, __rvv_uint8mf4x6_t, vuint8mf4_t, uint8, 6, _u8mf4x6)
+DEF_RVV_TUPLE_TYPE (vint8mf4x7_t, 17, __rvv_int8mf4x7_t, vint8mf4_t, int8, 7, _i8mf4x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18, __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7, _u8mf4x7)
+DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t, int8, 8, _i8mf4x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t, uint8, 8, _u8mf4x8)
+/* Define tuple types for SEW = 8, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t, int8, 2, _i8mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t, uint8, 2, _u8mf2x2)
+DEF_RVV_TUPLE_TYPE (vint8mf2x3_t, 17, __rvv_int8mf2x3_t, vint8mf2_t, int8, 3, _i8mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x3_t, 18, __rvv_uint8mf2x3_t, vuint8mf2_t, uint8, 3, _u8mf2x3)
+DEF_RVV_TUPLE_TYPE (vint8mf2x4_t, 17, __rvv_int8mf2x4_t, vint8mf2_t, int8, 4, _i8mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x4_t, 18, __rvv_uint8mf2x4_t, vuint8mf2_t, uint8, 4, _u8mf2x4)
+DEF_RVV_TUPLE_TYPE (vint8mf2x5_t, 17, __rvv_int8mf2x5_t, vint8mf2_t, int8, 5, _i8mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x5_t, 18, __rvv_uint8mf2x5_t, vuint8mf2_t, uint8, 5, _u8mf2x5)
+DEF_RVV_TUPLE_TYPE (vint8mf2x6_t, 17, __rvv_int8mf2x6_t, vint8mf2_t, int8, 6, _i8mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x6_t, 18, __rvv_uint8mf2x6_t, vuint8mf2_t, uint8, 6, _u8mf2x6)
+DEF_RVV_TUPLE_TYPE (vint8mf2x7_t, 17, __rvv_int8mf2x7_t, vint8mf2_t, int8, 7, _i8mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18, __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7, _u8mf2x7)
+DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t, int8, 8, _i8mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t, uint8, 8, _u8mf2x8)
+/* Define tuple types for SEW = 8, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8, 2, _i8m1x2)
+DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t, uint8, 2, _u8m1x2)
+DEF_RVV_TUPLE_TYPE (vint8m1x3_t, 16, __rvv_int8m1x3_t, vint8m1_t, int8, 3, _i8m1x3)
+DEF_RVV_TUPLE_TYPE (vuint8m1x3_t, 17, __rvv_uint8m1x3_t, vuint8m1_t, uint8, 3, _u8m1x3)
+DEF_RVV_TUPLE_TYPE (vint8m1x4_t, 16, __rvv_int8m1x4_t, vint8m1_t, int8, 4, _i8m1x4)
+DEF_RVV_TUPLE_TYPE (vuint8m1x4_t, 17, __rvv_uint8m1x4_t, vuint8m1_t, uint8, 4, _u8m1x4)
+DEF_RVV_TUPLE_TYPE (vint8m1x5_t, 16, __rvv_int8m1x5_t, vint8m1_t, int8, 5, _i8m1x5)
+DEF_RVV_TUPLE_TYPE (vuint8m1x5_t, 17, __rvv_uint8m1x5_t, vuint8m1_t, uint8, 5, _u8m1x5)
+DEF_RVV_TUPLE_TYPE (vint8m1x6_t, 16, __rvv_int8m1x6_t, vint8m1_t, int8, 6, _i8m1x6)
+DEF_RVV_TUPLE_TYPE (vuint8m1x6_t, 17, __rvv_uint8m1x6_t, vuint8m1_t, uint8, 6, _u8m1x6)
+DEF_RVV_TUPLE_TYPE (vint8m1x7_t, 16, __rvv_int8m1x7_t, vint8m1_t, int8, 7, _i8m1x7)
+DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17, __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _u8m1x7)
+DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8, 8, _i8m1x8)
+DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t, uint8, 8, _u8m1x8)
+/* Define tuple types for SEW = 8, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8, 2, _i8m2x2)
+DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t, uint8, 2, _u8m2x2)
+DEF_RVV_TUPLE_TYPE (vint8m2x3_t, 16, __rvv_int8m2x3_t, vint8m2_t, int8, 3, _i8m2x3)
+DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17, __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _u8m2x3)
+DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8, 4, _i8m2x4)
+DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t, uint8, 4, _u8m2x4)
+/* Define tuple types for SEW = 8, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8, 2, _i8m4x2)
+DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t, uint8, 2, _u8m4x2)
+/* Define tuple types for SEW = 16, LMUL = MF4.  */
+DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t, int16, 2, _i16mf4x2)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t, vuint16mf4_t, uint16, 2, _u16mf4x2)
+DEF_RVV_TUPLE_TYPE (vint16mf4x3_t, 18, __rvv_int16mf4x3_t, vint16mf4_t, int16, 3, _i16mf4x3)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x3_t, 19, __rvv_uint16mf4x3_t, vuint16mf4_t, uint16, 3, _u16mf4x3)
+DEF_RVV_TUPLE_TYPE (vint16mf4x4_t, 18, __rvv_int16mf4x4_t, vint16mf4_t, int16, 4, _i16mf4x4)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x4_t, 19, __rvv_uint16mf4x4_t, vuint16mf4_t, uint16, 4, _u16mf4x4)
+DEF_RVV_TUPLE_TYPE (vint16mf4x5_t, 18, __rvv_int16mf4x5_t, vint16mf4_t, int16, 5, _i16mf4x5)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x5_t, 19, __rvv_uint16mf4x5_t, vuint16mf4_t, uint16, 5, _u16mf4x5)
+DEF_RVV_TUPLE_TYPE (vint16mf4x6_t, 18, __rvv_int16mf4x6_t, vint16mf4_t, int16, 6, _i16mf4x6)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x6_t, 19, __rvv_uint16mf4x6_t, vuint16mf4_t, uint16, 6, _u16mf4x6)
+DEF_RVV_TUPLE_TYPE (vint16mf4x7_t, 18, __rvv_int16mf4x7_t, vint16mf4_t, int16, 7, _i16mf4x7)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19, __rvv_uint16mf4x7_t, vuint16mf4_t, uint16, 7, _u16mf4x7)
+DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t, int16, 8, _i16mf4x8)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t, vuint16mf4_t, uint16, 8, _u16mf4x8)
+/* Define tuple types for SEW = 16, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t, int16, 2, _i16mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t, vuint16mf2_t, uint16, 2, _u16mf2x2)
+DEF_RVV_TUPLE_TYPE (vint16mf2x3_t, 18, __rvv_int16mf2x3_t, vint16mf2_t, int16, 3, _i16mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x3_t, 19, __rvv_uint16mf2x3_t, vuint16mf2_t, uint16, 3, _u16mf2x3)
+DEF_RVV_TUPLE_TYPE (vint16mf2x4_t, 18, __rvv_int16mf2x4_t, vint16mf2_t, int16, 4, _i16mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x4_t, 19, __rvv_uint16mf2x4_t, vuint16mf2_t, uint16, 4, _u16mf2x4)
+DEF_RVV_TUPLE_TYPE (vint16mf2x5_t, 18, __rvv_int16mf2x5_t, vint16mf2_t, int16, 5, _i16mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x5_t, 19, __rvv_uint16mf2x5_t, vuint16mf2_t, uint16, 5, _u16mf2x5)
+DEF_RVV_TUPLE_TYPE (vint16mf2x6_t, 18, __rvv_int16mf2x6_t, vint16mf2_t, int16, 6, _i16mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x6_t, 19, __rvv_uint16mf2x6_t, vuint16mf2_t, uint16, 6, _u16mf2x6)
+DEF_RVV_TUPLE_TYPE (vint16mf2x7_t, 18, __rvv_int16mf2x7_t, vint16mf2_t, int16, 7, _i16mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19, __rvv_uint16mf2x7_t, vuint16mf2_t, uint16, 7, _u16mf2x7)
+DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t, int16, 8, _i16mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t, vuint16mf2_t, uint16, 8, _u16mf2x8)
+/* Define tuple types for SEW = 16, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t, int16, 2, _i16m1x2)
+DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t, uint16, 2, _u16m1x2)
+DEF_RVV_TUPLE_TYPE (vint16m1x3_t, 17, __rvv_int16m1x3_t, vint16m1_t, int16, 3, _i16m1x3)
+DEF_RVV_TUPLE_TYPE (vuint16m1x3_t, 18, __rvv_uint16m1x3_t, vuint16m1_t, uint16, 3, _u16m1x3)
+DEF_RVV_TUPLE_TYPE (vint16m1x4_t, 17, __rvv_int16m1x4_t, vint16m1_t, int16, 4, _i16m1x4)
+DEF_RVV_TUPLE_TYPE (vuint16m1x4_t, 18, __rvv_uint16m1x4_t, vuint16m1_t, uint16, 4, _u16m1x4)
+DEF_RVV_TUPLE_TYPE (vint16m1x5_t, 17, __rvv_int16m1x5_t, vint16m1_t, int16, 5, _i16m1x5)
+DEF_RVV_TUPLE_TYPE (vuint16m1x5_t, 18, __rvv_uint16m1x5_t, vuint16m1_t, uint16, 5, _u16m1x5)
+DEF_RVV_TUPLE_TYPE (vint16m1x6_t, 17, __rvv_int16m1x6_t, vint16m1_t, int16, 6, _i16m1x6)
+DEF_RVV_TUPLE_TYPE (vuint16m1x6_t, 18, __rvv_uint16m1x6_t, vuint16m1_t, uint16, 6, _u16m1x6)
+DEF_RVV_TUPLE_TYPE (vint16m1x7_t, 17, __rvv_int16m1x7_t, vint16m1_t, int16, 7, _i16m1x7)
+DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18, __rvv_uint16m1x7_t, vuint16m1_t, uint16, 7, _u16m1x7)
+DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t, int16, 8, _i16m1x8)
+DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t, uint16, 8, _u16m1x8)
+/* Define tuple types for SEW = 16, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t, int16, 2, _i16m2x2)
+DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t, uint16, 2, _u16m2x2)
+DEF_RVV_TUPLE_TYPE (vint16m2x3_t, 17, __rvv_int16m2x3_t, vint16m2_t, int16, 3, _i16m2x3)
+DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18, __rvv_uint16m2x3_t, vuint16m2_t, uint16, 3, _u16m2x3)
+DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t, int16, 4, _i16m2x4)
+DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t, uint16, 4, _u16m2x4)
+/* Define tuple types for SEW = 16, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t, int16, 2, _i16m4x2)
+DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t, uint16, 2, _u16m4x2)
+/* Define tuple types for SEW = 32, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t, int32, 2, _i32mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t, vuint32mf2_t, uint32, 2, _u32mf2x2)
+DEF_RVV_TUPLE_TYPE (vint32mf2x3_t, 18, __rvv_int32mf2x3_t, vint32mf2_t, int32, 3, _i32mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x3_t, 19, __rvv_uint32mf2x3_t, vuint32mf2_t, uint32, 3, _u32mf2x3)
+DEF_RVV_TUPLE_TYPE (vint32mf2x4_t, 18, __rvv_int32mf2x4_t, vint32mf2_t, int32, 4, _i32mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x4_t, 19, __rvv_uint32mf2x4_t, vuint32mf2_t, uint32, 4, _u32mf2x4)
+DEF_RVV_TUPLE_TYPE (vint32mf2x5_t, 18, __rvv_int32mf2x5_t, vint32mf2_t, int32, 5, _i32mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x5_t, 19, __rvv_uint32mf2x5_t, vuint32mf2_t, uint32, 5, _u32mf2x5)
+DEF_RVV_TUPLE_TYPE (vint32mf2x6_t, 18, __rvv_int32mf2x6_t, vint32mf2_t, int32, 6, _i32mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x6_t, 19, __rvv_uint32mf2x6_t, vuint32mf2_t, uint32, 6, _u32mf2x6)
+DEF_RVV_TUPLE_TYPE (vint32mf2x7_t, 18, __rvv_int32mf2x7_t, vint32mf2_t, int32, 7, _i32mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19, __rvv_uint32mf2x7_t, vuint32mf2_t, uint32, 7, _u32mf2x7)
+DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t, int32, 8, _i32mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t, vuint32mf2_t, uint32, 8, _u32mf2x8)
+/* Define tuple types for SEW = 32, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint32m1x2_t, 17, __rvv_int32m1x2_t, vint32m1_t, int32, 2, _i32m1x2)
+DEF_RVV_TUPLE_TYPE (vuint32m1x2_t, 18, __rvv_uint32m1x2_t, vuint32m1_t, uint32, 2, _u32m1x2)
+DEF_RVV_TUPLE_TYPE (vint32m1x3_t, 17, __rvv_int32m1x3_t, vint32m1_t, int32, 3, _i32m1x3)
+DEF_RVV_TUPLE_TYPE (vuint32m1x3_t, 18, __rvv_uint32m1x3_t, vuint32m1_t, uint32, 3, _u32m1x3)
+DEF_RVV_TUPLE_TYPE (vint32m1x4_t, 17, __rvv_int32m1x4_t, vint32m1_t, int32, 4, _i32m1x4)
+DEF_RVV_TUPLE_TYPE (vuint32m1x4_t, 18, __rvv_uint32m1x4_t, vuint32m1_t, uint32, 4, _u32m1x4)
+DEF_RVV_TUPLE_TYPE (vint32m1x5_t, 17, __rvv_int32m1x5_t, vint32m1_t, int32, 5, _i32m1x5)
+DEF_RVV_TUPLE_TYPE (vuint32m1x5_t, 18, __rvv_uint32m1x5_t, vuint32m1_t, uint32, 5, _u32m1x5)
+DEF_RVV_TUPLE_TYPE (vint32m1x6_t, 17, __rvv_int32m1x6_t, vint32m1_t, int32, 6, _i32m1x6)
+DEF_RVV_TUPLE_TYPE (vuint32m1x6_t, 18, __rvv_uint32m1x6_t, vuint32m1_t, uint32, 6, _u32m1x6)
+DEF_RVV_TUPLE_TYPE (vint32m1x7_t, 17, __rvv_int32m1x7_t, vint32m1_t, int32, 7, _i32m1x7)
+DEF_RVV_TUPLE_TYPE (vuint32m1x7_t, 18, __rvv_uint32m1x7_t, vuint32m1_t, uint32, 7, _u32m1x7)
+DEF_RVV_TUPLE_TYPE (vint32m1x8_t, 17, __rvv_int32m1x8_t, vint32m1_t, int32, 8, _i32m1x8)
+DEF_RVV_TUPLE_TYPE (vuint32m1x8_t, 18, __rvv_uint32m1x8_t, vuint32m1_t, uint32, 8, _u32m1x8)
+/* Define tuple types for SEW = 32, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint32m2x2_t, 17, __rvv_int32m2x2_t, vint32m2_t, int32, 2, _i32m2x2)
+DEF_RVV_TUPLE_TYPE (vuint32m2x2_t, 18, __rvv_uint32m2x2_t, vuint32m2_t, uint32, 2, _u32m2x2)
+DEF_RVV_TUPLE_TYPE (vint32m2x3_t, 17, __rvv_int32m2x3_t, vint32m2_t, int32, 3, _i32m2x3)
+DEF_RVV_TUPLE_TYPE (vuint32m2x3_t, 18, __rvv_uint32m2x3_t, vuint32m2_t, uint32, 3, _u32m2x3)
+DEF_RVV_TUPLE_TYPE (vint32m2x4_t, 17, __rvv_int32m2x4_t, vint32m2_t, int32, 4, _i32m2x4)
+DEF_RVV_TUPLE_TYPE (vuint32m2x4_t, 18, __rvv_uint32m2x4_t, vuint32m2_t, uint32, 4, _u32m2x4)
+/* Define tuple types for SEW = 32, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint32m4x2_t, 17, __rvv_int32m4x2_t, vint32m4_t, int32, 2, _i32m4x2)
+DEF_RVV_TUPLE_TYPE (vuint32m4x2_t, 18, __rvv_uint32m4x2_t, vuint32m4_t, uint32, 2, _u32m4x2)
+/* Define tuple types for SEW = 64, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint64m1x2_t, 17, __rvv_int64m1x2_t, vint64m1_t, int64, 2, _i64m1x2)
+DEF_RVV_TUPLE_TYPE (vuint64m1x2_t, 18, __rvv_uint64m1x2_t, vuint64m1_t, uint64, 2, _u64m1x2)
+DEF_RVV_TUPLE_TYPE (vint64m1x3_t, 17, __rvv_int64m1x3_t, vint64m1_t, int64, 3, _i64m1x3)
+DEF_RVV_TUPLE_TYPE (vuint64m1x3_t, 18, __rvv_uint64m1x3_t, vuint64m1_t, uint64, 3, _u64m1x3)
+DEF_RVV_TUPLE_TYPE (vint64m1x4_t, 17, __rvv_int64m1x4_t, vint64m1_t, int64, 4, _i64m1x4)
+DEF_RVV_TUPLE_TYPE (vuint64m1x4_t, 18, __rvv_uint64m1x4_t, vuint64m1_t, uint64, 4, _u64m1x4)
+DEF_RVV_TUPLE_TYPE (vint64m1x5_t, 17, __rvv_int64m1x5_t, vint64m1_t, int64, 5, _i64m1x5)
+DEF_RVV_TUPLE_TYPE (vuint64m1x5_t, 18, __rvv_uint64m1x5_t, vuint64m1_t, uint64, 5, _u64m1x5)
+DEF_RVV_TUPLE_TYPE (vint64m1x6_t, 17, __rvv_int64m1x6_t, vint64m1_t, int64, 6, _i64m1x6)
+DEF_RVV_TUPLE_TYPE (vuint64m1x6_t, 18, __rvv_uint64m1x6_t, vuint64m1_t, uint64, 6, _u64m1x6)
+DEF_RVV_TUPLE_TYPE (vint64m1x7_t, 17, __rvv_int64m1x7_t, vint64m1_t, int64, 7, _i64m1x7)
+DEF_RVV_TUPLE_TYPE (vuint64m1x7_t, 18, __rvv_uint64m1x7_t, vuint64m1_t, uint64, 7, _u64m1x7)
+DEF_RVV_TUPLE_TYPE (vint64m1x8_t, 17, __rvv_int64m1x8_t, vint64m1_t, int64, 8, _i64m1x8)
+DEF_RVV_TUPLE_TYPE (vuint64m1x8_t, 18, __rvv_uint64m1x8_t, vuint64m1_t, uint64, 8, _u64m1x8)
+/* Define tuple types for SEW = 64, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint64m2x2_t, 17, __rvv_int64m2x2_t, vint64m2_t, int64, 2, _i64m2x2)
+DEF_RVV_TUPLE_TYPE (vuint64m2x2_t, 18, __rvv_uint64m2x2_t, vuint64m2_t, uint64, 2, _u64m2x2)
+DEF_RVV_TUPLE_TYPE (vint64m2x3_t, 17, __rvv_int64m2x3_t, vint64m2_t, int64, 3, _i64m2x3)
+DEF_RVV_TUPLE_TYPE (vuint64m2x3_t, 18, __rvv_uint64m2x3_t, vuint64m2_t, uint64, 3, _u64m2x3)
+DEF_RVV_TUPLE_TYPE (vint64m2x4_t, 17, __rvv_int64m2x4_t, vint64m2_t, int64, 4, _i64m2x4)
+DEF_RVV_TUPLE_TYPE (vuint64m2x4_t, 18, __rvv_uint64m2x4_t, vuint64m2_t, uint64, 4, _u64m2x4)
+/* Define tuple types for SEW = 64, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint64m4x2_t, 17, __rvv_int64m4x2_t, vint64m4_t, int64, 2, _i64m4x2)
+DEF_RVV_TUPLE_TYPE (vuint64m4x2_t, 18, __rvv_uint64m4x2_t, vuint64m4_t, uint64, 2, _u64m4x2)
+
+/* Define floating-point tuple types.  */
+/* Define tuple types for SEW = 32, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x2_t, 20, __rvv_float32mf2x2_t, vfloat32mf2_t, float, 2, _f32mf2x2)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x3_t, 20, __rvv_float32mf2x3_t, vfloat32mf2_t, float, 3, _f32mf2x3)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x4_t, 20, __rvv_float32mf2x4_t, vfloat32mf2_t, float, 4, _f32mf2x4)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x5_t, 20, __rvv_float32mf2x5_t, vfloat32mf2_t, float, 5, _f32mf2x5)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x6_t, 20, __rvv_float32mf2x6_t, vfloat32mf2_t, float, 6, _f32mf2x6)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x7_t, 20, __rvv_float32mf2x7_t, vfloat32mf2_t, float, 7, _f32mf2x7)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x8_t, 20, __rvv_float32mf2x8_t, vfloat32mf2_t, float, 8, _f32mf2x8)
+/* Define tuple types for SEW = 32, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m1x2_t, 19, __rvv_float32m1x2_t, vfloat32m1_t, float, 2, _f32m1x2)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x3_t, 19, __rvv_float32m1x3_t, vfloat32m1_t, float, 3, _f32m1x3)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x4_t, 19, __rvv_float32m1x4_t, vfloat32m1_t, float, 4, _f32m1x4)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x5_t, 19, __rvv_float32m1x5_t, vfloat32m1_t, float, 5, _f32m1x5)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x6_t, 19, __rvv_float32m1x6_t, vfloat32m1_t, float, 6, _f32m1x6)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x7_t, 19, __rvv_float32m1x7_t, vfloat32m1_t, float, 7, _f32m1x7)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x8_t, 19, __rvv_float32m1x8_t, vfloat32m1_t, float, 8, _f32m1x8)
+/* Define tuple types for SEW = 32, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m2x2_t, 19, __rvv_float32m2x2_t, vfloat32m2_t, float, 2, _f32m2x2)
+DEF_RVV_TUPLE_TYPE (vfloat32m2x3_t, 19, __rvv_float32m2x3_t, vfloat32m2_t, float, 3, _f32m2x3)
+DEF_RVV_TUPLE_TYPE (vfloat32m2x4_t, 19, __rvv_float32m2x4_t, vfloat32m2_t, float, 4, _f32m2x4)
+/* Define tuple types for SEW = 32, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m4x2_t, 19, __rvv_float32m4x2_t, vfloat32m4_t, float, 2, _f32m4x2)
+/* Define tuple types for SEW = 64, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m1x2_t, 19, __rvv_float64m1x2_t, vfloat64m1_t, double, 2, _f64m1x2)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x3_t, 19, __rvv_float64m1x3_t, vfloat64m1_t, double, 3, _f64m1x3)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x4_t, 19, __rvv_float64m1x4_t, vfloat64m1_t, double, 4, _f64m1x4)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x5_t, 19, __rvv_float64m1x5_t, vfloat64m1_t, double, 5, _f64m1x5)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x6_t, 19, __rvv_float64m1x6_t, vfloat64m1_t, double, 6, _f64m1x6)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x7_t, 19, __rvv_float64m1x7_t, vfloat64m1_t, double, 7, _f64m1x7)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x8_t, 19, __rvv_float64m1x8_t, vfloat64m1_t, double, 8, _f64m1x8)
+/* Define tuple types for SEW = 64, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m2x2_t, 19, __rvv_float64m2x2_t, vfloat64m2_t, double, 2, _f64m2x2)
+DEF_RVV_TUPLE_TYPE (vfloat64m2x3_t, 19, __rvv_float64m2x3_t, vfloat64m2_t, double, 3, _f64m2x3)
+DEF_RVV_TUPLE_TYPE (vfloat64m2x4_t, 19, __rvv_float64m2x4_t, vfloat64m2_t, double, 4, _f64m2x4)
+/* Define tuple types for SEW = 64, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m4x2_t, 19, __rvv_float64m4x2_t, vfloat64m4_t, double, 2, _f64m4x2)
+
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
 DEF_RVV_OP_TYPE (v)
@@ -417,5 +653,6 @@ DEF_RVV_BASE_TYPE (size_ptr, build_pointer_type (size_type_node))
 #undef DEF_RVV_PRED_TYPE
 #undef DEF_RVV_OP_TYPE
 #undef DEF_RVV_TYPE
+#undef DEF_RVV_TUPLE_TYPE
 #undef DEF_RVV_BASE_TYPE
 #undef DEF_RVV_TYPE_INDEX
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 8ffb9d33e33..93261a72134 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum operand_type_index
 enum vector_type_index
 {
 #define DEF_RVV_TYPE(NAME, ABI_NAME, NCHARS, ARGS...) VECTOR_TYPE_##NAME,
+#define DEF_RVV_TUPLE_TYPE(NAME, ABI_NAME, NCHARS, ARGS...) VECTOR_TYPE_##NAME,
 #include "riscv-vector-builtins.def"
   NUM_VECTOR_TYPES,
   VECTOR_TYPE_INVALID = NUM_VECTOR_TYPES
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 8aae22d3259..4b1c32de0a3 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -84,6 +84,12 @@ TODO: FP16 vector needs support of 'zvfh', we don't support it yet.  */
 	      VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64,                      \
 	      VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)
 #endif
+#ifndef TUPLE_ENTRY
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
+		    RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64,                \
+		    RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128,               \
+		    RATIO_FOR_MIN_VLEN128)
+#endif
 
 /* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN == 32.  */
 ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
@@ -157,4 +163,174 @@ ENTRY (VNx4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 3
 ENTRY (VNx2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
 ENTRY (VNx1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
 
+/* Enable or disable the tuple mode.  BASE_MODE is the base vector mode of
+   the tuple mode; for example, the BASE_MODE of VNx2x1SImode is VNx1SImode.
+   All tuple modes must satisfy NF * LMUL (of BASE_MODE) <= 8.
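+
+   For example, with TARGET_MIN_VLEN = 128, VNx2x64QI has BASE_MODE VNx64QI
+   (LMUL = 4) and NF = 2, giving NF * LMUL = 8, so the mode is enabled; a
+   hypothetical VNx3x64QI (NF * LMUL = 12) is therefore not defined.  */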
+
+/* Tuple modes for EEW = 8.  */
+TUPLE_ENTRY (VNx2x64QI, TARGET_MIN_VLEN >= 128, VNx64QI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 2)
+TUPLE_ENTRY (VNx2x32QI, TARGET_MIN_VLEN >= 64, VNx32QI, 2, LMUL_RESERVED, 0, LMUL_4, 2, LMUL_2, 4)
+TUPLE_ENTRY (VNx3x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
+TUPLE_ENTRY (VNx4x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
+TUPLE_ENTRY (VNx2x16QI, true, VNx16QI, 2, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx3x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 3, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx4x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 4, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx5x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx6x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx7x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx8x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx2x8QI, true, VNx8QI, 2, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx3x8QI, true, VNx8QI, 3, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx4x8QI, true, VNx8QI, 4, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx5x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 5, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx6x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 6, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx7x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 7, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx8x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 8, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx2x4QI, true, VNx4QI, 2, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx3x4QI, true, VNx4QI, 3, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx4x4QI, true, VNx4QI, 4, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx5x4QI, true, VNx4QI, 5, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx6x4QI, true, VNx4QI, 6, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx7x4QI, true, VNx4QI, 7, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx8x4QI, true, VNx4QI, 8, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx2x2QI, true, VNx2QI, 2, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx3x2QI, true, VNx2QI, 3, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx4x2QI, true, VNx2QI, 4, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx5x2QI, true, VNx2QI, 5, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx6x2QI, true, VNx2QI, 6, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx7x2QI, true, VNx2QI, 7, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx8x2QI, true, VNx2QI, 8, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx2x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 2, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 3, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 4, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 5, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 6, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 7, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 8, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 16.  */
+TUPLE_ENTRY (VNx2x32HI, TARGET_MIN_VLEN >= 128, VNx32HI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
+TUPLE_ENTRY (VNx2x16HI, TARGET_MIN_VLEN >= 64, VNx16HI, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
+TUPLE_ENTRY (VNx3x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
+TUPLE_ENTRY (VNx4x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
+TUPLE_ENTRY (VNx2x8HI, true, VNx8HI, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx3x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx4x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx5x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx6x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx7x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx8x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx2x4HI, true, VNx4HI, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx3x4HI, true, VNx4HI, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx4x4HI, true, VNx4HI, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx5x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx6x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx7x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx8x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx2x2HI, true, VNx2HI, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx3x2HI, true, VNx2HI, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx4x2HI, true, VNx2HI, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx5x2HI, true, VNx2HI, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx6x2HI, true, VNx2HI, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx7x2HI, true, VNx2HI, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx8x2HI, true, VNx2HI, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx2x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 32.  */
+TUPLE_ENTRY (VNx2x16SI, TARGET_MIN_VLEN >= 128, VNx16SI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
+TUPLE_ENTRY (VNx2x8SI, TARGET_MIN_VLEN >= 64, VNx8SI, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
+TUPLE_ENTRY (VNx3x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx4x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx3x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx4x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx5x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx6x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx7x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx8x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx2x2SI, true, VNx2SI, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx3x2SI, true, VNx2SI, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx4x2SI, true, VNx2SI, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx5x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx6x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx7x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx8x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx2x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx2x16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx16SF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
+TUPLE_ENTRY (VNx2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx8SF, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
+TUPLE_ENTRY (VNx3x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx4x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx2x4SF, TARGET_VECTOR_ELEN_FP_32, VNx4SF, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx3x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx4x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx5x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx6x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx7x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx8x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx2x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx3x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx4x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx5x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx6x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx7x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx8x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx2x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 64.  */
+TUPLE_ENTRY (VNx2x8DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx8DI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
+TUPLE_ENTRY (VNx2x4DI, TARGET_VECTOR_ELEN_64, VNx4DI, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+TUPLE_ENTRY (VNx3x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx4x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx2x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx3x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx4x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx5x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx6x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx7x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx8x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx2x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx2x8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx8DF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
+TUPLE_ENTRY (VNx2x4DF, TARGET_VECTOR_ELEN_FP_64, VNx4DF, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+TUPLE_ENTRY (VNx3x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx4x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx2x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx3x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx4x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx5x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx6x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx7x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx8x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx2x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+
 #undef ENTRY
+#undef TUPLE_ENTRY
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index a0b32a247b6..032383167a0 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -992,13 +992,39 @@ riscv_v_ext_vector_mode_p (machine_mode mode)
   return false;
 }
 
+/* Return true if MODE is an enabled RVV tuple mode.  */
+
+bool
+riscv_v_ext_tuple_mode_p (machine_mode mode)
+{
+#define TUPLE_ENTRY(MODE, REQUIREMENT, ...)                                    \
+  case MODE##mode:                                                             \
+    return REQUIREMENT;
+  switch (mode)
+    {
+#include "riscv-vector-switch.def"
+    default:
+      return false;
+    }
+
+  return false;
+}
+
+/* Return true if MODE is either an RVV vector mode or an RVV tuple mode.  */
+
+static bool
+riscv_v_ext_mode_p (machine_mode mode)
+{
+  return riscv_v_ext_vector_mode_p (mode) || riscv_v_ext_tuple_mode_p (mode);
+}
+
 /* Call from ADJUST_NUNITS in riscv-modes.def. Return the correct
    NUNITS size for corresponding machine_mode.  */
 
 poly_int64
 riscv_v_adjust_nunits (machine_mode mode, int scale)
 {
-  if (riscv_v_ext_vector_mode_p (mode))
+  if (riscv_v_ext_mode_p (mode))
     return riscv_vector_chunks * scale;
   return scale;
 }
@@ -1056,7 +1082,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case PLUS:
       /* RVV load/store disallow any offset.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       info->type = ADDRESS_REG;
@@ -1067,7 +1093,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case LO_SUM:
       /* RVV load/store disallow LO_SUM.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       info->type = ADDRESS_LO_SUM;
@@ -1089,7 +1115,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case CONST_INT:
       /* RVV load/store disallow CONST_INT.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       /* Small-integer addresses don't occur very often, but they
@@ -2221,7 +2247,7 @@ riscv_immediate_operand_p (int code, HOST_WIDE_INT x)
 static int
 riscv_binary_cost (rtx x, int single_insns, int double_insns)
 {
-  if (!riscv_v_ext_vector_mode_p (GET_MODE (x))
+  if (!riscv_v_ext_mode_p (GET_MODE (x))
       && GET_MODE_SIZE (GET_MODE (x)).to_constant () == UNITS_PER_WORD * 2)
     return COSTS_N_INSNS (double_insns);
   return COSTS_N_INSNS (single_insns);
@@ -2271,7 +2297,7 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN
 {
   /* TODO: We set RVV instruction cost as 1 by default.
      Cost Model need to be well analyzed and supported in the future. */
-  if (riscv_v_ext_vector_mode_p (mode))
+  if (riscv_v_ext_mode_p (mode))
     {
       *total = COSTS_N_INSNS (1);
       return true;
@@ -5885,7 +5911,7 @@ static bool
 riscv_secondary_memory_needed (machine_mode mode, reg_class_t class1,
 			       reg_class_t class2)
 {
-  return (!riscv_v_ext_vector_mode_p (mode)
+  return (!riscv_v_ext_mode_p (mode)
 	  && GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD
 	  && (class1 == FP_REGS) != (class2 == FP_REGS)
 	  && !TARGET_XTHEADFMV);
@@ -5919,6 +5945,22 @@ riscv_hard_regno_nregs (unsigned int regno, machine_mode mode)
       return exact_div (GET_MODE_SIZE (mode), UNITS_PER_V_REG).to_constant ();
     }
 
+  /* For tuple modes, the number of registers = NF * LMUL.  */
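+  /* For example, with TARGET_MIN_VLEN = 64, VNx2x2DImode has NF = 2 and a
+     VNx2DImode subpart spanning two vector registers (LMUL = 2), so the
+     tuple occupies 2 * 2 = 4 registers.  */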
+  if (riscv_v_ext_tuple_mode_p (mode))
+    {
+      unsigned int nf = riscv_vector::get_nf (mode);
+      machine_mode subpart_mode = riscv_vector::get_subpart_mode (mode);
+      poly_int64 size = GET_MODE_SIZE (subpart_mode);
+      gcc_assert (known_eq (size * nf, GET_MODE_SIZE (mode)));
+      if (maybe_lt (size, UNITS_PER_V_REG))
+	return nf;
+      else
+	{
+	  unsigned int lmul = exact_div (size, UNITS_PER_V_REG).to_constant ();
+	  return nf * lmul;
+	}
+    }
+
   /* mode for VL or VTYPE are just a marker, not holding value,
      so it always consume one register.  */
   if (regno == VTYPE_REGNUM || regno == VL_REGNUM)
@@ -5944,7 +5986,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 
   if (GP_REG_P (regno))
     {
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!GP_REG_P (regno + nregs - 1))
@@ -5952,7 +5994,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
     }
   else if (FP_REG_P (regno))
     {
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!FP_REG_P (regno + nregs - 1))
@@ -5971,7 +6013,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
     }
   else if (V_REG_P (regno))
     {
-      if (!riscv_v_ext_vector_mode_p (mode))
+      if (!riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!V_REG_P (regno + nregs - 1))
@@ -5980,8 +6022,12 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
       /* 3.3.2. LMUL = 2,4,8, register numbers should be multiple of 2,4,8.
 	 but for mask vector register, register numbers can be any number. */
       int lmul = 1;
-      if (known_gt (GET_MODE_SIZE (mode), UNITS_PER_V_REG))
-	lmul = exact_div (GET_MODE_SIZE (mode), UNITS_PER_V_REG).to_constant ();
+      machine_mode rvv_mode = mode;
+      if (riscv_v_ext_tuple_mode_p (rvv_mode))
+	rvv_mode = riscv_vector::get_subpart_mode (rvv_mode);
+      poly_int64 size = GET_MODE_SIZE (rvv_mode);
+      if (known_gt (size, UNITS_PER_V_REG))
+	lmul = exact_div (size, UNITS_PER_V_REG).to_constant ();
       if (lmul != 1)
 	return ((regno % lmul) == 0);
     }
@@ -7004,7 +7050,7 @@ static bool
 riscv_vector_mode_supported_p (machine_mode mode)
 {
   if (TARGET_VECTOR)
-    return riscv_v_ext_vector_mode_p (mode);
+    return riscv_v_ext_mode_p (mode);
 
   return false;
 }
@@ -7046,8 +7092,17 @@ riscv_regmode_natural_size (machine_mode mode)
      anything smaller than that.  */
   /* ??? For now, only do this for variable-width RVV registers.
      Doing it for constant-sized registers breaks lower-subreg.c.  */
-  if (!riscv_vector_chunks.is_constant () && riscv_v_ext_vector_mode_p (mode))
-    return BYTES_PER_RISCV_VECTOR;
+  if (!riscv_vector_chunks.is_constant () && riscv_v_ext_mode_p (mode))
+    {
+      if (riscv_v_ext_tuple_mode_p (mode))
+	{
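+	  /* A tuple mode built from fractional-LMUL subparts (e.g.
+	     VNx2x1SImode) is subdivided at the subpart size rather than
+	     at a whole vector register.  */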
+	  poly_uint64 size
+	    = GET_MODE_SIZE (riscv_vector::get_subpart_mode (mode));
+	  if (known_lt (size, BYTES_PER_RISCV_VECTOR))
+	    return size;
+	}
+      return BYTES_PER_RISCV_VECTOR;
+    }
   return UNITS_PER_WORD;
 }
 
@@ -7147,6 +7202,19 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
 							& ~zeroed_hardregs);
 }
 
+/* Implement target hook TARGET_ARRAY_MODE.  */
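+/* For example, with TARGET_MIN_VLEN = 64 an array of two vint32m1_t values
+   (VNx2SImode) is assigned the tuple mode VNx2x2SImode.  */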
+
+static opt_machine_mode
+riscv_array_mode (machine_mode mode, unsigned HOST_WIDE_INT nelems)
+{
+  machine_mode vmode;
+  if (TARGET_VECTOR
+      && riscv_vector::get_tuple_mode (mode, nelems).exists (&vmode))
+    return vmode;
+
+  return opt_machine_mode ();
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -7401,6 +7469,9 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
 #undef TARGET_ZERO_CALL_USED_REGS
 #define TARGET_ZERO_CALL_USED_REGS riscv_zero_call_used_regs
 
+#undef TARGET_ARRAY_MODE
+#define TARGET_ARRAY_MODE riscv_array_mode
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 1fb29da8a0b..e0d1a3315e0 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -169,7 +169,32 @@
   VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,
   VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,
   VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,
-  VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF"
+  VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,
+  VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,
+  VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI,
+  VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI,
+  VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI,
+  VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI,
+  VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI,
+  VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,
+  VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,
+  VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,
+  VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI,
+  VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI,
+  VNx2x16SI,VNx2x8SI,VNx3x8SI,VNx4x8SI,
+  VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,
+  VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,
+  VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,
+  VNx2x16SF,VNx2x8SF,VNx3x8SF,VNx4x8SF,
+  VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF,
+  VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF,
+  VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF,
+  VNx2x8DI,VNx2x4DI,VNx3x4DI,VNx4x4DI,
+  VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,
+  VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,
+  VNx2x8DF,VNx2x4DF,VNx3x4DF,VNx4x4DF,
+  VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF,
+  VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF"
   (const_string "unknown"))
 
 ;; True if the main data type is twice the size of a word.
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 3c6575208be..b42afb0ff1a 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -487,6 +487,166 @@
   (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
 ])
 
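+;; Tuple mode iterator.  Each entry is gated on the same requirement as
+;; the corresponding TUPLE_ENTRY in riscv-vector-switch.def.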
+(define_mode_iterator VT [
+  (VNx2x64QI "TARGET_MIN_VLEN >= 128")
+  (VNx2x32QI "TARGET_MIN_VLEN >= 64")
+  (VNx3x32QI "TARGET_MIN_VLEN >= 128")
+  (VNx4x32QI "TARGET_MIN_VLEN >= 128")
+  VNx2x16QI
+  (VNx3x16QI "TARGET_MIN_VLEN >= 64")
+  (VNx4x16QI "TARGET_MIN_VLEN >= 64")
+  (VNx5x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx6x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx7x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx8x16QI "TARGET_MIN_VLEN >= 128")
+  VNx2x8QI
+  VNx3x8QI
+  VNx4x8QI
+  (VNx5x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx6x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx7x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx8x8QI "TARGET_MIN_VLEN >= 64")
+  VNx2x4QI
+  VNx3x4QI
+  VNx4x4QI
+  VNx5x4QI
+  VNx6x4QI
+  VNx7x4QI
+  VNx8x4QI
+  VNx2x2QI
+  VNx3x2QI
+  VNx4x2QI
+  VNx5x2QI
+  VNx6x2QI
+  VNx7x2QI
+  VNx8x2QI
+  (VNx2x1QI "TARGET_MIN_VLEN < 128")
+  (VNx3x1QI "TARGET_MIN_VLEN < 128")
+  (VNx4x1QI "TARGET_MIN_VLEN < 128")
+  (VNx5x1QI "TARGET_MIN_VLEN < 128")
+  (VNx6x1QI "TARGET_MIN_VLEN < 128")
+  (VNx7x1QI "TARGET_MIN_VLEN < 128")
+  (VNx8x1QI "TARGET_MIN_VLEN < 128")
+  (VNx2x32HI "TARGET_MIN_VLEN >= 128")
+  (VNx2x16HI "TARGET_MIN_VLEN >= 64")
+  (VNx3x16HI "TARGET_MIN_VLEN >= 128")
+  (VNx4x16HI "TARGET_MIN_VLEN >= 128")
+  VNx2x8HI
+  (VNx3x8HI "TARGET_MIN_VLEN >= 64")
+  (VNx4x8HI "TARGET_MIN_VLEN >= 64")
+  (VNx5x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx6x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx7x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx8x8HI "TARGET_MIN_VLEN >= 128")
+  VNx2x4HI
+  VNx3x4HI
+  VNx4x4HI
+  (VNx5x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx6x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx7x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx8x4HI "TARGET_MIN_VLEN >= 64")
+  VNx2x2HI
+  VNx3x2HI
+  VNx4x2HI
+  VNx5x2HI
+  VNx6x2HI
+  VNx7x2HI
+  VNx8x2HI
+  (VNx2x1HI "TARGET_MIN_VLEN < 128")
+  (VNx3x1HI "TARGET_MIN_VLEN < 128")
+  (VNx4x1HI "TARGET_MIN_VLEN < 128")
+  (VNx5x1HI "TARGET_MIN_VLEN < 128")
+  (VNx6x1HI "TARGET_MIN_VLEN < 128")
+  (VNx7x1HI "TARGET_MIN_VLEN < 128")
+  (VNx8x1HI "TARGET_MIN_VLEN < 128")
+  (VNx2x16SI "TARGET_MIN_VLEN >= 128")
+  (VNx2x8SI "TARGET_MIN_VLEN >= 64")
+  (VNx3x8SI "TARGET_MIN_VLEN >= 128")
+  (VNx4x8SI "TARGET_MIN_VLEN >= 128")
+  VNx2x4SI
+  (VNx3x4SI "TARGET_MIN_VLEN >= 64")
+  (VNx4x4SI "TARGET_MIN_VLEN >= 64")
+  (VNx5x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx6x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx7x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx8x4SI "TARGET_MIN_VLEN >= 128")
+  VNx2x2SI
+  VNx3x2SI
+  VNx4x2SI
+  (VNx5x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx6x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx7x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx8x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx2x1SI "TARGET_MIN_VLEN < 128")
+  (VNx3x1SI "TARGET_MIN_VLEN < 128")
+  (VNx4x1SI "TARGET_MIN_VLEN < 128")
+  (VNx5x1SI "TARGET_MIN_VLEN < 128")
+  (VNx6x1SI "TARGET_MIN_VLEN < 128")
+  (VNx7x1SI "TARGET_MIN_VLEN < 128")
+  (VNx8x1SI "TARGET_MIN_VLEN < 128")
+  (VNx2x8DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4DI "TARGET_VECTOR_ELEN_64")
+  (VNx3x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx4x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx3x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx4x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx5x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx6x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx7x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx8x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx3x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx4x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx5x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx6x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx7x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx8x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx2x16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx2x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx3x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx4x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx3x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx4x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx5x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx6x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx7x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx8x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx3x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx4x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx5x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx6x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx7x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx8x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx2x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx3x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx4x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx5x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx6x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx7x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx8x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx2x8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx3x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx4x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx3x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx4x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx5x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx6x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx7x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx8x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx3x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx4x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx5x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx6x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx7x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx8x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+])
+
 (define_mode_attr VLMULX2 [
   (VNx1QI "VNx2QI") (VNx2QI "VNx4QI") (VNx4QI "VNx8QI") (VNx8QI "VNx16QI") (VNx16QI "VNx32QI") (VNx32QI "VNx64QI") (VNx64QI "VNx128QI")
   (VNx1HI "VNx2HI") (VNx2HI "VNx4HI") (VNx4HI "VNx8HI") (VNx8HI "VNx16HI") (VNx16HI "VNx32HI") (VNx32HI "VNx64HI")
@@ -563,6 +723,32 @@
   (VNx1DI "VNx1BI") (VNx2DI "VNx2BI") (VNx4DI "VNx4BI") (VNx8DI "VNx8BI") (VNx16DI "VNx16BI")
   (VNx1SF "VNx1BI") (VNx2SF "VNx2BI") (VNx4SF "VNx4BI") (VNx8SF "VNx8BI") (VNx16SF "VNx16BI") (VNx32SF "VNx32BI")
   (VNx1DF "VNx1BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI") (VNx16DF "VNx16BI")
+  (VNx2x64QI "VNx64BI") (VNx2x32QI "VNx32BI") (VNx3x32QI "VNx32BI") (VNx4x32QI "VNx32BI")
+  (VNx2x16QI "VNx16BI") (VNx3x16QI "VNx16BI") (VNx4x16QI "VNx16BI") (VNx5x16QI "VNx16BI") (VNx6x16QI "VNx16BI") (VNx7x16QI "VNx16BI") (VNx8x16QI "VNx16BI")
+  (VNx2x8QI "VNx8BI") (VNx3x8QI "VNx8BI") (VNx4x8QI "VNx8BI") (VNx5x8QI "VNx8BI") (VNx6x8QI "VNx8BI") (VNx7x8QI "VNx8BI") (VNx8x8QI "VNx8BI")
+  (VNx2x4QI "VNx4BI") (VNx3x4QI "VNx4BI") (VNx4x4QI "VNx4BI") (VNx5x4QI "VNx4BI") (VNx6x4QI "VNx4BI") (VNx7x4QI "VNx4BI") (VNx8x4QI "VNx4BI")
+  (VNx2x2QI "VNx2BI") (VNx3x2QI "VNx2BI") (VNx4x2QI "VNx2BI") (VNx5x2QI "VNx2BI") (VNx6x2QI "VNx2BI") (VNx7x2QI "VNx2BI") (VNx8x2QI "VNx2BI")
+  (VNx2x1QI "VNx1BI") (VNx3x1QI "VNx1BI") (VNx4x1QI "VNx1BI") (VNx5x1QI "VNx1BI") (VNx6x1QI "VNx1BI") (VNx7x1QI "VNx1BI") (VNx8x1QI "VNx1BI")
+  (VNx2x32HI "VNx32BI") (VNx2x16HI "VNx16BI") (VNx3x16HI "VNx16BI") (VNx4x16HI "VNx16BI")
+  (VNx2x8HI "VNx8BI") (VNx3x8HI "VNx8BI") (VNx4x8HI "VNx8BI") (VNx5x8HI "VNx8BI") (VNx6x8HI "VNx8BI") (VNx7x8HI "VNx8BI") (VNx8x8HI "VNx8BI")
+  (VNx2x4HI "VNx4BI") (VNx3x4HI "VNx4BI") (VNx4x4HI "VNx4BI") (VNx5x4HI "VNx4BI") (VNx6x4HI "VNx4BI") (VNx7x4HI "VNx4BI") (VNx8x4HI "VNx4BI")
+  (VNx2x2HI "VNx2BI") (VNx3x2HI "VNx2BI") (VNx4x2HI "VNx2BI") (VNx5x2HI "VNx2BI") (VNx6x2HI "VNx2BI") (VNx7x2HI "VNx2BI") (VNx8x2HI "VNx2BI")
+  (VNx2x1HI "VNx1BI") (VNx3x1HI "VNx1BI") (VNx4x1HI "VNx1BI") (VNx5x1HI "VNx1BI") (VNx6x1HI "VNx1BI") (VNx7x1HI "VNx1BI") (VNx8x1HI "VNx1BI")
+  (VNx2x16SI "VNx16BI") (VNx2x8SI "VNx8BI") (VNx3x8SI "VNx8BI") (VNx4x8SI "VNx8BI")
+  (VNx2x4SI "VNx4BI") (VNx3x4SI "VNx4BI") (VNx4x4SI "VNx4BI") (VNx5x4SI "VNx4BI") (VNx6x4SI "VNx4BI") (VNx7x4SI "VNx4BI") (VNx8x4SI "VNx4BI")
+  (VNx2x2SI "VNx2BI") (VNx3x2SI "VNx2BI") (VNx4x2SI "VNx2BI") (VNx5x2SI "VNx2BI") (VNx6x2SI "VNx2BI") (VNx7x2SI "VNx2BI") (VNx8x2SI "VNx2BI")
+  (VNx2x1SI "VNx1BI") (VNx3x1SI "VNx1BI") (VNx4x1SI "VNx1BI") (VNx5x1SI "VNx1BI") (VNx6x1SI "VNx1BI") (VNx7x1SI "VNx1BI") (VNx8x1SI "VNx1BI")
+  (VNx2x8DI "VNx8BI") (VNx2x4DI "VNx4BI") (VNx3x4DI "VNx4BI") (VNx4x4DI "VNx4BI")
+  (VNx2x2DI "VNx2BI") (VNx3x2DI "VNx2BI") (VNx4x2DI "VNx2BI") (VNx5x2DI "VNx2BI") (VNx6x2DI "VNx2BI") (VNx7x2DI "VNx2BI") (VNx8x2DI "VNx2BI")
+  (VNx2x1DI "VNx1BI") (VNx3x1DI "VNx1BI") (VNx4x1DI "VNx1BI") (VNx5x1DI "VNx1BI") (VNx6x1DI "VNx1BI") (VNx7x1DI "VNx1BI") (VNx8x1DI "VNx1BI")
+  (VNx2x16SF "VNx16BI") (VNx2x8SF "VNx8BI") (VNx3x8SF "VNx8BI") (VNx4x8SF "VNx8BI")
+  (VNx2x4SF "VNx4BI") (VNx3x4SF "VNx4BI") (VNx4x4SF "VNx4BI") (VNx5x4SF "VNx4BI") (VNx6x4SF "VNx4BI") (VNx7x4SF "VNx4BI") (VNx8x4SF "VNx4BI")
+  (VNx2x2SF "VNx2BI") (VNx3x2SF "VNx2BI") (VNx4x2SF "VNx2BI") (VNx5x2SF "VNx2BI") (VNx6x2SF "VNx2BI") (VNx7x2SF "VNx2BI") (VNx8x2SF "VNx2BI")
+  (VNx2x1SF "VNx1BI") (VNx3x1SF "VNx1BI") (VNx4x1SF "VNx1BI") (VNx5x1SF "VNx1BI") (VNx6x1SF "VNx1BI") (VNx7x1SF "VNx1BI") (VNx8x1SF "VNx1BI")
+  (VNx2x8DF "VNx8BI")
+  (VNx2x4DF "VNx4BI") (VNx3x4DF "VNx4BI") (VNx4x4DF "VNx4BI")
+  (VNx2x2DF "VNx2BI") (VNx3x2DF "VNx2BI") (VNx4x2DF "VNx2BI") (VNx5x2DF "VNx2BI") (VNx6x2DF "VNx2BI") (VNx7x2DF "VNx2BI") (VNx8x2DF "VNx2BI")
+  (VNx2x1DF "VNx1BI") (VNx3x1DF "VNx1BI") (VNx4x1DF "VNx1BI") (VNx5x1DF "VNx1BI") (VNx6x1DF "VNx1BI") (VNx7x1DF "VNx1BI") (VNx8x1DF "VNx1BI")
 ])
 
 (define_mode_attr vm [
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 0fda11ed67d..955c2971b60 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -708,6 +708,50 @@
   DONE;
 })
 
+;; Define tuple mode data movement.
+;; operands[2] is used to save the offset of each subpart.
+;; operands[3] is used to calculate the address for each subpart.
+;; operands[4] is the VL of the vsetvli instruction.
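+;; (Roughly: after reload the pattern below is split into one load/store
+;; per subpart, with operands[2]/operands[3] stepping the address from one
+;; subpart to the next and operands[4] holding the VL for the vsetvli.)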
+(define_expand "mov<mode>"
+  [(parallel [(set (match_operand:VT 0 "reg_or_mem_operand")
+                   (match_operand:VT 1 "general_operand"))
+     (clobber (match_dup 2))
+     (clobber (match_dup 3))
+     (clobber (match_dup 4))])]
+  "TARGET_VECTOR"
+  {
+    /* Need to force register if mem <- !reg.  */
+    if (MEM_P (operands[0]) && !REG_P (operands[1]))
+      operands[1] = force_reg (<MODE>mode, operands[1]);
+
+    if (GET_CODE (operands[1]) == CONST_VECTOR)
+      {
+        riscv_vector::expand_tuple_move (<VM>mode, operands);
+        DONE;
+      }
+
+    operands[2] = gen_rtx_SCRATCH (Pmode);
+    operands[3] = gen_rtx_SCRATCH (Pmode);
+    operands[4] = gen_rtx_SCRATCH (Pmode);
+  })
+
+(define_insn_and_split "*mov<VT:mode>_<P:mode>"
+  [(set (match_operand:VT 0 "reg_or_mem_operand" "=vr,vr, m")
+        (match_operand:VT 1 "reg_or_mem_operand" " vr, m,vr"))
+   (clobber (match_scratch:P 2 "=X,&r,&r"))
+   (clobber (match_scratch:P 3 "=X,&r,&r"))
+   (clobber (match_scratch:P 4 "=X,&r,&r"))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+  {
+    riscv_vector::expand_tuple_move (<VM>mode, operands);
+    DONE;
+  }
+  [(set_attr "type" "vmov,vlde,vste")
+   (set_attr "mode" "<VT:MODE>")])
+
 ;; -----------------------------------------------------------------
 ;; ---- Duplicate Operations
 ;; -----------------------------------------------------------------
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-10.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-10.c
new file mode 100644
index 00000000000..62dd7c8663c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-10.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64x -mabi=ilp32d" } */
+
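+/* With the integer-only zve64x extension, each integer tuple type is
+   available as a built-in __rvv_*_t type, while the floating-point tuple
+   types are rejected as unknown type names.  */
+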
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;}
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;}
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;}
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;}
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;}
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;}
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;}
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;}
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;}
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;}
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;}
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;}
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;}
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;}
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;}
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;}
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;}
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;}
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;}
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;}
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;}
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;} /* { dg-error {unknown type name '__rvv_float32m1x2_t'} } */
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;} /* { dg-error {unknown type name '__rvv_float32m1x3_t'} } */
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;} /* { dg-error {unknown type name '__rvv_float32m1x4_t'} } */
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;} /* { dg-error {unknown type name '__rvv_float32m1x5_t'} } */
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;} /* { dg-error {unknown type name '__rvv_float32m1x6_t'} } */
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;} /* { dg-error {unknown type name '__rvv_float32m1x7_t'} } */
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;} /* { dg-error {unknown type name '__rvv_float32m1x8_t'} } */
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;} /* { dg-error {unknown type name '__rvv_float32m2x2_t'} } */
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;} /* { dg-error {unknown type name '__rvv_float32m2x3_t'} } */
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;} /* { dg-error {unknown type name '__rvv_float32m2x4_t'} } */
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;} /* { dg-error {unknown type name '__rvv_float32m4x2_t'} } */
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-11.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-11.c
new file mode 100644
index 00000000000..a524b415880
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-11.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64f -mabi=ilp32d" } */
+
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;}
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;}
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;}
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;}
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;}
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;}
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;}
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;}
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;}
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;}
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;}
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;}
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;}
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;}
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;}
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;}
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;}
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;}
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;}
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;}
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;}
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;}
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;}
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;}
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;}
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;}
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;}
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;}
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;}
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-12.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-12.c
new file mode 100644
index 00000000000..925aa9eccc3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-12.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64d -mabi=ilp32d" } */
+
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;}
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;}
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;}
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;}
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;}
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;}
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;}
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;}
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;}
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;}
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;}
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;}
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;}
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;}
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;}
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;}
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;}
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;}
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;}
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;}
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;}
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;}
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;}
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;}
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;}
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;}
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;}
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;}
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;}
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;}
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;}
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;}
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;}
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;}
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;}
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;}
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;}
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;}
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;}
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-13.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-13.c
new file mode 100644
index 00000000000..13940e56358
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-13.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32x -mabi=ilp32d" } */
+
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;} /* { dg-error {unknown type name '__rvv_int64m1x2_t'} } */
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x2_t'} } */
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;} /* { dg-error {unknown type name '__rvv_int64m1x3_t'} } */
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x3_t'} } */
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;} /* { dg-error {unknown type name '__rvv_int64m1x4_t'} } */
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x4_t'} } */
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;} /* { dg-error {unknown type name '__rvv_int64m1x5_t'} } */
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x5_t'} } */
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;} /* { dg-error {unknown type name '__rvv_int64m1x6_t'} } */
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x6_t'} } */
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;} /* { dg-error {unknown type name '__rvv_int64m1x7_t'} } */
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x7_t'} } */
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;} /* { dg-error {unknown type name '__rvv_int64m1x8_t'} } */
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x8_t'} } */
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;} /* { dg-error {unknown type name '__rvv_int64m2x2_t'} } */
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x2_t'} } */
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;} /* { dg-error {unknown type name '__rvv_int64m2x3_t'} } */
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x3_t'} } */
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type name '__rvv_int64m2x4_t'} } */
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;} /* { dg-error {unknown type name '__rvv_float32m1x2_t'} } */
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;} /* { dg-error {unknown type name '__rvv_float32m1x3_t'} } */
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;} /* { dg-error {unknown type name '__rvv_float32m1x4_t'} } */
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;} /* { dg-error {unknown type name '__rvv_float32m1x5_t'} } */
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;} /* { dg-error {unknown type name '__rvv_float32m1x6_t'} } */
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;} /* { dg-error {unknown type name '__rvv_float32m1x7_t'} } */
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;} /* { dg-error {unknown type name '__rvv_float32m1x8_t'} } */
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;} /* { dg-error {unknown type name '__rvv_float32m2x2_t'} } */
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;} /* { dg-error {unknown type name '__rvv_float32m2x3_t'} } */
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;} /* { dg-error {unknown type name '__rvv_float32m2x4_t'} } */
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;} /* { dg-error {unknown type name '__rvv_float32m4x2_t'} } */
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
new file mode 100644
index 00000000000..163152ae923
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-14.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32x_zvl64b -mabi=ilp32d" } */
+
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;} /* { dg-error {unknown type name '__rvv_int64m1x2_t'} } */
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x2_t'} } */
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;} /* { dg-error {unknown type name '__rvv_int64m1x3_t'} } */
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x3_t'} } */
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;} /* { dg-error {unknown type name '__rvv_int64m1x4_t'} } */
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x4_t'} } */
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;} /* { dg-error {unknown type name '__rvv_int64m1x5_t'} } */
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x5_t'} } */
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;} /* { dg-error {unknown type name '__rvv_int64m1x6_t'} } */
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x6_t'} } */
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;} /* { dg-error {unknown type name '__rvv_int64m1x7_t'} } */
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x7_t'} } */
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;} /* { dg-error {unknown type name '__rvv_int64m1x8_t'} } */
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x8_t'} } */
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;} /* { dg-error {unknown type name '__rvv_int64m2x2_t'} } */
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x2_t'} } */
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;} /* { dg-error {unknown type name '__rvv_int64m2x3_t'} } */
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x3_t'} } */
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type name '__rvv_int64m2x4_t'} } */
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;} /* { dg-error {unknown type name '__rvv_float32m1x2_t'} } */
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;} /* { dg-error {unknown type name '__rvv_float32m1x3_t'} } */
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;} /* { dg-error {unknown type name '__rvv_float32m1x4_t'} } */
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;} /* { dg-error {unknown type name '__rvv_float32m1x5_t'} } */
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;} /* { dg-error {unknown type name '__rvv_float32m1x6_t'} } */
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;} /* { dg-error {unknown type name '__rvv_float32m1x7_t'} } */
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;} /* { dg-error {unknown type name '__rvv_float32m1x8_t'} } */
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;} /* { dg-error {unknown type name '__rvv_float32m2x2_t'} } */
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;} /* { dg-error {unknown type name '__rvv_float32m2x3_t'} } */
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;} /* { dg-error {unknown type name '__rvv_float32m2x4_t'} } */
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;} /* { dg-error {unknown type name '__rvv_float32m4x2_t'} } */
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-15.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-15.c
new file mode 100644
index 00000000000..b52d86c6a36
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-15.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32f -mabi=ilp32d" } */
+
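+/* A note on the expected diagnostics below: zve32f gives ELEN=32 with a 32-bit
+   minimum VLEN, so 64-bit element tuples and the fractional-LMUL variants that
+   need VLEN >= 64 should be rejected; all other tuple types are accepted.  */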
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;} /* { dg-error {unknown type name '__rvv_int64m1x2_t'} } */
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x2_t'} } */
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;} /* { dg-error {unknown type name '__rvv_int64m1x3_t'} } */
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x3_t'} } */
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;} /* { dg-error {unknown type name '__rvv_int64m1x4_t'} } */
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x4_t'} } */
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;} /* { dg-error {unknown type name '__rvv_int64m1x5_t'} } */
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x5_t'} } */
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;} /* { dg-error {unknown type name '__rvv_int64m1x6_t'} } */
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x6_t'} } */
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;} /* { dg-error {unknown type name '__rvv_int64m1x7_t'} } */
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x7_t'} } */
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;} /* { dg-error {unknown type name '__rvv_int64m1x8_t'} } */
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x8_t'} } */
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;} /* { dg-error {unknown type name '__rvv_int64m2x2_t'} } */
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x2_t'} } */
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;} /* { dg-error {unknown type name '__rvv_int64m2x3_t'} } */
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x3_t'} } */
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type name '__rvv_int64m2x4_t'} } */
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;}
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;}
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;}
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;}
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;}
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;}
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;}
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;}
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
new file mode 100644
index 00000000000..be2cbb5efd7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-16.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32f_zvl64b -mabi=ilp32d" } */
+
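+/* zvl64b raises the minimum VLEN to 64, so the fractional-LMUL tuple types
+   rejected in abi-15.c become available here; tuples of 64-bit elements still
+   exceed ELEN=32 and should remain diagnosed as unknown types.  */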
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;} /* { dg-error {unknown type name '__rvv_int64m1x2_t'} } */
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x2_t'} } */
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;} /* { dg-error {unknown type name '__rvv_int64m1x3_t'} } */
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x3_t'} } */
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;} /* { dg-error {unknown type name '__rvv_int64m1x4_t'} } */
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x4_t'} } */
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;} /* { dg-error {unknown type name '__rvv_int64m1x5_t'} } */
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x5_t'} } */
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;} /* { dg-error {unknown type name '__rvv_int64m1x6_t'} } */
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x6_t'} } */
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;} /* { dg-error {unknown type name '__rvv_int64m1x7_t'} } */
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x7_t'} } */
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;} /* { dg-error {unknown type name '__rvv_int64m1x8_t'} } */
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x8_t'} } */
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;} /* { dg-error {unknown type name '__rvv_int64m2x2_t'} } */
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x2_t'} } */
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;} /* { dg-error {unknown type name '__rvv_int64m2x3_t'} } */
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x3_t'} } */
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type name '__rvv_int64m2x4_t'} } */
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;}
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;}
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;}
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;}
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;}
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;}
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;}
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;}
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-8.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-8.c
new file mode 100644
index 00000000000..282ee488be0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-8.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
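+/* rv32gcv implies the full V extension (ELEN=64, VLEN >= 128), so every tuple
+   type provided by the intrinsics should be accepted without diagnostics.  */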
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;}
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;}
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;}
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;}
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;}
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;}
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;}
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;}
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;}
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;}
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;}
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;}
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;}
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;}
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;}
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;}
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;}
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;}
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;}
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;}
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;}
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;}
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;}
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;}
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;}
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;}
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;}
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;}
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;}
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;}
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;}
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;}
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;}
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;}
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;}
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;}
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;}
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;}
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;}
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;}
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;}
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;}
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;}
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;}
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;}
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;}
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;}
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;}
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;}
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;}
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;}
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;}
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;}
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;}
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;}
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;}
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;}
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;}
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;}
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;}
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;}
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;}
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;}
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;}
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;}
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;}
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;}
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;}
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;}
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;}
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;}
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;}
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;}
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;}
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;}
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;}
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;}
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;}
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;}
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;}
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;}
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;}
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;}
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;}
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;}
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;}
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;}
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;}
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;}
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;}
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;}
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;}
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;}
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;}
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;}
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;}
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;}
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;}
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;}
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;}
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;}
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;}
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;}
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;}
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;}
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;}
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;}
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;}
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;}
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;}
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;}
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;}
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;}
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;}
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;}
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;}
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;}
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;}
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;}
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;}
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;}
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;}
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;}
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;}
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;}
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;}
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;}
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;}
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;}
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;}
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;}
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;}
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;}
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;}
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;}
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;}
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;}
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;}
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;}
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;}
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;}
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;}
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;}
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;}
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;}
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;}
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;}
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;}
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;}
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;}
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;}
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;}
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;}
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;}
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;}
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;}
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;}
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;}
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;}
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;}
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;}
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;}
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;}
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;}
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;}
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;}
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;}
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;}
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;}
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;}
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;}
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;}
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;}
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;}
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;}
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;}
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;}
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;}
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;}
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;}
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;}
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;}
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;}
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;}
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;}
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;}
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;}
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;}
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;}
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;}
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;}
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;}
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;}
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;}
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;}
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;}
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;}
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;}
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;}
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;}
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-9.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-9.c
new file mode 100644
index 00000000000..37f78d1c7c7
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-9.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */
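+/* rv32gc has no vector extension at all, so none of the RVV tuple types are
+   registered and every reference should be diagnosed as an unknown type.  */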
+
+void f___rvv_int8mf8x2_t () {__rvv_int8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x2_t'} } */
+void f___rvv_uint8mf8x2_t () {__rvv_uint8mf8x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x2_t'} } */
+void f___rvv_int8mf8x3_t () {__rvv_int8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x3_t'} } */
+void f___rvv_uint8mf8x3_t () {__rvv_uint8mf8x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x3_t'} } */
+void f___rvv_int8mf8x4_t () {__rvv_int8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x4_t'} } */
+void f___rvv_uint8mf8x4_t () {__rvv_uint8mf8x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x4_t'} } */
+void f___rvv_int8mf8x5_t () {__rvv_int8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x5_t'} } */
+void f___rvv_uint8mf8x5_t () {__rvv_uint8mf8x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x5_t'} } */
+void f___rvv_int8mf8x6_t () {__rvv_int8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x6_t'} } */
+void f___rvv_uint8mf8x6_t () {__rvv_uint8mf8x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x6_t'} } */
+void f___rvv_int8mf8x7_t () {__rvv_int8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x7_t'} } */
+void f___rvv_uint8mf8x7_t () {__rvv_uint8mf8x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x7_t'} } */
+void f___rvv_int8mf8x8_t () {__rvv_int8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf8x8_t'} } */
+void f___rvv_uint8mf8x8_t () {__rvv_uint8mf8x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf8x8_t'} } */
+void f___rvv_int8mf4x2_t () {__rvv_int8mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x2_t'} } */
+void f___rvv_uint8mf4x2_t () {__rvv_uint8mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x2_t'} } */
+void f___rvv_int8mf4x3_t () {__rvv_int8mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x3_t'} } */
+void f___rvv_uint8mf4x3_t () {__rvv_uint8mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x3_t'} } */
+void f___rvv_int8mf4x4_t () {__rvv_int8mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x4_t'} } */
+void f___rvv_uint8mf4x4_t () {__rvv_uint8mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x4_t'} } */
+void f___rvv_int8mf4x5_t () {__rvv_int8mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x5_t'} } */
+void f___rvv_uint8mf4x5_t () {__rvv_uint8mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x5_t'} } */
+void f___rvv_int8mf4x6_t () {__rvv_int8mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x6_t'} } */
+void f___rvv_uint8mf4x6_t () {__rvv_uint8mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x6_t'} } */
+void f___rvv_int8mf4x7_t () {__rvv_int8mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x7_t'} } */
+void f___rvv_uint8mf4x7_t () {__rvv_uint8mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x7_t'} } */
+void f___rvv_int8mf4x8_t () {__rvv_int8mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf4x8_t'} } */
+void f___rvv_uint8mf4x8_t () {__rvv_uint8mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf4x8_t'} } */
+void f___rvv_int8mf2x2_t () {__rvv_int8mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x2_t'} } */
+void f___rvv_uint8mf2x2_t () {__rvv_uint8mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x2_t'} } */
+void f___rvv_int8mf2x3_t () {__rvv_int8mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x3_t'} } */
+void f___rvv_uint8mf2x3_t () {__rvv_uint8mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x3_t'} } */
+void f___rvv_int8mf2x4_t () {__rvv_int8mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x4_t'} } */
+void f___rvv_uint8mf2x4_t () {__rvv_uint8mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x4_t'} } */
+void f___rvv_int8mf2x5_t () {__rvv_int8mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x5_t'} } */
+void f___rvv_uint8mf2x5_t () {__rvv_uint8mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x5_t'} } */
+void f___rvv_int8mf2x6_t () {__rvv_int8mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x6_t'} } */
+void f___rvv_uint8mf2x6_t () {__rvv_uint8mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x6_t'} } */
+void f___rvv_int8mf2x7_t () {__rvv_int8mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x7_t'} } */
+void f___rvv_uint8mf2x7_t () {__rvv_uint8mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x7_t'} } */
+void f___rvv_int8mf2x8_t () {__rvv_int8mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int8mf2x8_t'} } */
+void f___rvv_uint8mf2x8_t () {__rvv_uint8mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint8mf2x8_t'} } */
+void f___rvv_int8m1x2_t () {__rvv_int8m1x2_t t;} /* { dg-error {unknown type name '__rvv_int8m1x2_t'} } */
+void f___rvv_uint8m1x2_t () {__rvv_uint8m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x2_t'} } */
+void f___rvv_int8m1x3_t () {__rvv_int8m1x3_t t;} /* { dg-error {unknown type name '__rvv_int8m1x3_t'} } */
+void f___rvv_uint8m1x3_t () {__rvv_uint8m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x3_t'} } */
+void f___rvv_int8m1x4_t () {__rvv_int8m1x4_t t;} /* { dg-error {unknown type name '__rvv_int8m1x4_t'} } */
+void f___rvv_uint8m1x4_t () {__rvv_uint8m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x4_t'} } */
+void f___rvv_int8m1x5_t () {__rvv_int8m1x5_t t;} /* { dg-error {unknown type name '__rvv_int8m1x5_t'} } */
+void f___rvv_uint8m1x5_t () {__rvv_uint8m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x5_t'} } */
+void f___rvv_int8m1x6_t () {__rvv_int8m1x6_t t;} /* { dg-error {unknown type name '__rvv_int8m1x6_t'} } */
+void f___rvv_uint8m1x6_t () {__rvv_uint8m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x6_t'} } */
+void f___rvv_int8m1x7_t () {__rvv_int8m1x7_t t;} /* { dg-error {unknown type name '__rvv_int8m1x7_t'} } */
+void f___rvv_uint8m1x7_t () {__rvv_uint8m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x7_t'} } */
+void f___rvv_int8m1x8_t () {__rvv_int8m1x8_t t;} /* { dg-error {unknown type name '__rvv_int8m1x8_t'} } */
+void f___rvv_uint8m1x8_t () {__rvv_uint8m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint8m1x8_t'} } */
+void f___rvv_int8m2x2_t () {__rvv_int8m2x2_t t;} /* { dg-error {unknown type name '__rvv_int8m2x2_t'} } */
+void f___rvv_uint8m2x2_t () {__rvv_uint8m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint8m2x2_t'} } */
+void f___rvv_int8m2x3_t () {__rvv_int8m2x3_t t;} /* { dg-error {unknown type name '__rvv_int8m2x3_t'} } */
+void f___rvv_uint8m2x3_t () {__rvv_uint8m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint8m2x3_t'} } */
+void f___rvv_int8m2x4_t () {__rvv_int8m2x4_t t;} /* { dg-error {unknown type name '__rvv_int8m2x4_t'} } */
+void f___rvv_uint8m2x4_t () {__rvv_uint8m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint8m2x4_t'} } */
+void f___rvv_int8m4x2_t () {__rvv_int8m4x2_t t;} /* { dg-error {unknown type name '__rvv_int8m4x2_t'} } */
+void f___rvv_uint8m4x2_t () {__rvv_uint8m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint8m4x2_t'} } */
+void f___rvv_int16mf4x2_t () {__rvv_int16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x2_t'} } */
+void f___rvv_uint16mf4x2_t () {__rvv_uint16mf4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x2_t'} } */
+void f___rvv_int16mf4x3_t () {__rvv_int16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x3_t'} } */
+void f___rvv_uint16mf4x3_t () {__rvv_uint16mf4x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x3_t'} } */
+void f___rvv_int16mf4x4_t () {__rvv_int16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x4_t'} } */
+void f___rvv_uint16mf4x4_t () {__rvv_uint16mf4x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x4_t'} } */
+void f___rvv_int16mf4x5_t () {__rvv_int16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x5_t'} } */
+void f___rvv_uint16mf4x5_t () {__rvv_uint16mf4x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x5_t'} } */
+void f___rvv_int16mf4x6_t () {__rvv_int16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x6_t'} } */
+void f___rvv_uint16mf4x6_t () {__rvv_uint16mf4x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x6_t'} } */
+void f___rvv_int16mf4x7_t () {__rvv_int16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x7_t'} } */
+void f___rvv_uint16mf4x7_t () {__rvv_uint16mf4x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x7_t'} } */
+void f___rvv_int16mf4x8_t () {__rvv_int16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf4x8_t'} } */
+void f___rvv_uint16mf4x8_t () {__rvv_uint16mf4x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4x8_t'} } */
+void f___rvv_int16mf2x2_t () {__rvv_int16mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x2_t'} } */
+void f___rvv_uint16mf2x2_t () {__rvv_uint16mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x2_t'} } */
+void f___rvv_int16mf2x3_t () {__rvv_int16mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x3_t'} } */
+void f___rvv_uint16mf2x3_t () {__rvv_uint16mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x3_t'} } */
+void f___rvv_int16mf2x4_t () {__rvv_int16mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x4_t'} } */
+void f___rvv_uint16mf2x4_t () {__rvv_uint16mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x4_t'} } */
+void f___rvv_int16mf2x5_t () {__rvv_int16mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x5_t'} } */
+void f___rvv_uint16mf2x5_t () {__rvv_uint16mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x5_t'} } */
+void f___rvv_int16mf2x6_t () {__rvv_int16mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x6_t'} } */
+void f___rvv_uint16mf2x6_t () {__rvv_uint16mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x6_t'} } */
+void f___rvv_int16mf2x7_t () {__rvv_int16mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x7_t'} } */
+void f___rvv_uint16mf2x7_t () {__rvv_uint16mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x7_t'} } */
+void f___rvv_int16mf2x8_t () {__rvv_int16mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int16mf2x8_t'} } */
+void f___rvv_uint16mf2x8_t () {__rvv_uint16mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint16mf2x8_t'} } */
+void f___rvv_int16m1x2_t () {__rvv_int16m1x2_t t;} /* { dg-error {unknown type name '__rvv_int16m1x2_t'} } */
+void f___rvv_uint16m1x2_t () {__rvv_uint16m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x2_t'} } */
+void f___rvv_int16m1x3_t () {__rvv_int16m1x3_t t;} /* { dg-error {unknown type name '__rvv_int16m1x3_t'} } */
+void f___rvv_uint16m1x3_t () {__rvv_uint16m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x3_t'} } */
+void f___rvv_int16m1x4_t () {__rvv_int16m1x4_t t;} /* { dg-error {unknown type name '__rvv_int16m1x4_t'} } */
+void f___rvv_uint16m1x4_t () {__rvv_uint16m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x4_t'} } */
+void f___rvv_int16m1x5_t () {__rvv_int16m1x5_t t;} /* { dg-error {unknown type name '__rvv_int16m1x5_t'} } */
+void f___rvv_uint16m1x5_t () {__rvv_uint16m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x5_t'} } */
+void f___rvv_int16m1x6_t () {__rvv_int16m1x6_t t;} /* { dg-error {unknown type name '__rvv_int16m1x6_t'} } */
+void f___rvv_uint16m1x6_t () {__rvv_uint16m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x6_t'} } */
+void f___rvv_int16m1x7_t () {__rvv_int16m1x7_t t;} /* { dg-error {unknown type name '__rvv_int16m1x7_t'} } */
+void f___rvv_uint16m1x7_t () {__rvv_uint16m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x7_t'} } */
+void f___rvv_int16m1x8_t () {__rvv_int16m1x8_t t;} /* { dg-error {unknown type name '__rvv_int16m1x8_t'} } */
+void f___rvv_uint16m1x8_t () {__rvv_uint16m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint16m1x8_t'} } */
+void f___rvv_int16m2x2_t () {__rvv_int16m2x2_t t;} /* { dg-error {unknown type name '__rvv_int16m2x2_t'} } */
+void f___rvv_uint16m2x2_t () {__rvv_uint16m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint16m2x2_t'} } */
+void f___rvv_int16m2x3_t () {__rvv_int16m2x3_t t;} /* { dg-error {unknown type name '__rvv_int16m2x3_t'} } */
+void f___rvv_uint16m2x3_t () {__rvv_uint16m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint16m2x3_t'} } */
+void f___rvv_int16m2x4_t () {__rvv_int16m2x4_t t;} /* { dg-error {unknown type name '__rvv_int16m2x4_t'} } */
+void f___rvv_uint16m2x4_t () {__rvv_uint16m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint16m2x4_t'} } */
+void f___rvv_int16m4x2_t () {__rvv_int16m4x2_t t;} /* { dg-error {unknown type name '__rvv_int16m4x2_t'} } */
+void f___rvv_uint16m4x2_t () {__rvv_uint16m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint16m4x2_t'} } */
+void f___rvv_int32mf2x2_t () {__rvv_int32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x2_t'} } */
+void f___rvv_uint32mf2x2_t () {__rvv_uint32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x2_t'} } */
+void f___rvv_int32mf2x3_t () {__rvv_int32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x3_t'} } */
+void f___rvv_uint32mf2x3_t () {__rvv_uint32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x3_t'} } */
+void f___rvv_int32mf2x4_t () {__rvv_int32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x4_t'} } */
+void f___rvv_uint32mf2x4_t () {__rvv_uint32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x4_t'} } */
+void f___rvv_int32mf2x5_t () {__rvv_int32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x5_t'} } */
+void f___rvv_uint32mf2x5_t () {__rvv_uint32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x5_t'} } */
+void f___rvv_int32mf2x6_t () {__rvv_int32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x6_t'} } */
+void f___rvv_uint32mf2x6_t () {__rvv_uint32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x6_t'} } */
+void f___rvv_int32mf2x7_t () {__rvv_int32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x7_t'} } */
+void f___rvv_uint32mf2x7_t () {__rvv_uint32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x7_t'} } */
+void f___rvv_int32mf2x8_t () {__rvv_int32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_int32mf2x8_t'} } */
+void f___rvv_uint32mf2x8_t () {__rvv_uint32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2x8_t'} } */
+void f___rvv_int32m1x2_t () {__rvv_int32m1x2_t t;} /* { dg-error {unknown type name '__rvv_int32m1x2_t'} } */
+void f___rvv_uint32m1x2_t () {__rvv_uint32m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x2_t'} } */
+void f___rvv_int32m1x3_t () {__rvv_int32m1x3_t t;} /* { dg-error {unknown type name '__rvv_int32m1x3_t'} } */
+void f___rvv_uint32m1x3_t () {__rvv_uint32m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x3_t'} } */
+void f___rvv_int32m1x4_t () {__rvv_int32m1x4_t t;} /* { dg-error {unknown type name '__rvv_int32m1x4_t'} } */
+void f___rvv_uint32m1x4_t () {__rvv_uint32m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x4_t'} } */
+void f___rvv_int32m1x5_t () {__rvv_int32m1x5_t t;} /* { dg-error {unknown type name '__rvv_int32m1x5_t'} } */
+void f___rvv_uint32m1x5_t () {__rvv_uint32m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x5_t'} } */
+void f___rvv_int32m1x6_t () {__rvv_int32m1x6_t t;} /* { dg-error {unknown type name '__rvv_int32m1x6_t'} } */
+void f___rvv_uint32m1x6_t () {__rvv_uint32m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x6_t'} } */
+void f___rvv_int32m1x7_t () {__rvv_int32m1x7_t t;} /* { dg-error {unknown type name '__rvv_int32m1x7_t'} } */
+void f___rvv_uint32m1x7_t () {__rvv_uint32m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x7_t'} } */
+void f___rvv_int32m1x8_t () {__rvv_int32m1x8_t t;} /* { dg-error {unknown type name '__rvv_int32m1x8_t'} } */
+void f___rvv_uint32m1x8_t () {__rvv_uint32m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint32m1x8_t'} } */
+void f___rvv_int32m2x2_t () {__rvv_int32m2x2_t t;} /* { dg-error {unknown type name '__rvv_int32m2x2_t'} } */
+void f___rvv_uint32m2x2_t () {__rvv_uint32m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint32m2x2_t'} } */
+void f___rvv_int32m2x3_t () {__rvv_int32m2x3_t t;} /* { dg-error {unknown type name '__rvv_int32m2x3_t'} } */
+void f___rvv_uint32m2x3_t () {__rvv_uint32m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint32m2x3_t'} } */
+void f___rvv_int32m2x4_t () {__rvv_int32m2x4_t t;} /* { dg-error {unknown type name '__rvv_int32m2x4_t'} } */
+void f___rvv_uint32m2x4_t () {__rvv_uint32m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint32m2x4_t'} } */
+void f___rvv_int32m4x2_t () {__rvv_int32m4x2_t t;} /* { dg-error {unknown type name '__rvv_int32m4x2_t'} } */
+void f___rvv_uint32m4x2_t () {__rvv_uint32m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint32m4x2_t'} } */
+void f___rvv_int64m1x2_t () {__rvv_int64m1x2_t t;} /* { dg-error {unknown type name '__rvv_int64m1x2_t'} } */
+void f___rvv_uint64m1x2_t () {__rvv_uint64m1x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x2_t'} } */
+void f___rvv_int64m1x3_t () {__rvv_int64m1x3_t t;} /* { dg-error {unknown type name '__rvv_int64m1x3_t'} } */
+void f___rvv_uint64m1x3_t () {__rvv_uint64m1x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x3_t'} } */
+void f___rvv_int64m1x4_t () {__rvv_int64m1x4_t t;} /* { dg-error {unknown type name '__rvv_int64m1x4_t'} } */
+void f___rvv_uint64m1x4_t () {__rvv_uint64m1x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x4_t'} } */
+void f___rvv_int64m1x5_t () {__rvv_int64m1x5_t t;} /* { dg-error {unknown type name '__rvv_int64m1x5_t'} } */
+void f___rvv_uint64m1x5_t () {__rvv_uint64m1x5_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x5_t'} } */
+void f___rvv_int64m1x6_t () {__rvv_int64m1x6_t t;} /* { dg-error {unknown type name '__rvv_int64m1x6_t'} } */
+void f___rvv_uint64m1x6_t () {__rvv_uint64m1x6_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x6_t'} } */
+void f___rvv_int64m1x7_t () {__rvv_int64m1x7_t t;} /* { dg-error {unknown type name '__rvv_int64m1x7_t'} } */
+void f___rvv_uint64m1x7_t () {__rvv_uint64m1x7_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x7_t'} } */
+void f___rvv_int64m1x8_t () {__rvv_int64m1x8_t t;} /* { dg-error {unknown type name '__rvv_int64m1x8_t'} } */
+void f___rvv_uint64m1x8_t () {__rvv_uint64m1x8_t t;} /* { dg-error {unknown type name '__rvv_uint64m1x8_t'} } */
+void f___rvv_int64m2x2_t () {__rvv_int64m2x2_t t;} /* { dg-error {unknown type name '__rvv_int64m2x2_t'} } */
+void f___rvv_uint64m2x2_t () {__rvv_uint64m2x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x2_t'} } */
+void f___rvv_int64m2x3_t () {__rvv_int64m2x3_t t;} /* { dg-error {unknown type name '__rvv_int64m2x3_t'} } */
+void f___rvv_uint64m2x3_t () {__rvv_uint64m2x3_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x3_t'} } */
+void f___rvv_int64m2x4_t () {__rvv_int64m2x4_t t;} /* { dg-error {unknown type name '__rvv_int64m2x4_t'} } */
+void f___rvv_uint64m2x4_t () {__rvv_uint64m2x4_t t;} /* { dg-error {unknown type name '__rvv_uint64m2x4_t'} } */
+void f___rvv_int64m4x2_t () {__rvv_int64m4x2_t t;} /* { dg-error {unknown type name '__rvv_int64m4x2_t'} } */
+void f___rvv_uint64m4x2_t () {__rvv_uint64m4x2_t t;} /* { dg-error {unknown type name '__rvv_uint64m4x2_t'} } */
+void f___rvv_float32mf2x2_t () {__rvv_float32mf2x2_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x2_t'} } */
+void f___rvv_float32mf2x3_t () {__rvv_float32mf2x3_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x3_t'} } */
+void f___rvv_float32mf2x4_t () {__rvv_float32mf2x4_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x4_t'} } */
+void f___rvv_float32mf2x5_t () {__rvv_float32mf2x5_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x5_t'} } */
+void f___rvv_float32mf2x6_t () {__rvv_float32mf2x6_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x6_t'} } */
+void f___rvv_float32mf2x7_t () {__rvv_float32mf2x7_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x7_t'} } */
+void f___rvv_float32mf2x8_t () {__rvv_float32mf2x8_t t;} /* { dg-error {unknown type name '__rvv_float32mf2x8_t'} } */
+void f___rvv_float32m1x2_t () {__rvv_float32m1x2_t t;} /* { dg-error {unknown type name '__rvv_float32m1x2_t'} } */
+void f___rvv_float32m1x3_t () {__rvv_float32m1x3_t t;} /* { dg-error {unknown type name '__rvv_float32m1x3_t'} } */
+void f___rvv_float32m1x4_t () {__rvv_float32m1x4_t t;} /* { dg-error {unknown type name '__rvv_float32m1x4_t'} } */
+void f___rvv_float32m1x5_t () {__rvv_float32m1x5_t t;} /* { dg-error {unknown type name '__rvv_float32m1x5_t'} } */
+void f___rvv_float32m1x6_t () {__rvv_float32m1x6_t t;} /* { dg-error {unknown type name '__rvv_float32m1x6_t'} } */
+void f___rvv_float32m1x7_t () {__rvv_float32m1x7_t t;} /* { dg-error {unknown type name '__rvv_float32m1x7_t'} } */
+void f___rvv_float32m1x8_t () {__rvv_float32m1x8_t t;} /* { dg-error {unknown type name '__rvv_float32m1x8_t'} } */
+void f___rvv_float32m2x2_t () {__rvv_float32m2x2_t t;} /* { dg-error {unknown type name '__rvv_float32m2x2_t'} } */
+void f___rvv_float32m2x3_t () {__rvv_float32m2x3_t t;} /* { dg-error {unknown type name '__rvv_float32m2x3_t'} } */
+void f___rvv_float32m2x4_t () {__rvv_float32m2x4_t t;} /* { dg-error {unknown type name '__rvv_float32m2x4_t'} } */
+void f___rvv_float32m4x2_t () {__rvv_float32m4x2_t t;} /* { dg-error {unknown type name '__rvv_float32m4x2_t'} } */
+void f___rvv_float64m1x2_t () {__rvv_float64m1x2_t t;} /* { dg-error {unknown type name '__rvv_float64m1x2_t'} } */
+void f___rvv_float64m1x3_t () {__rvv_float64m1x3_t t;} /* { dg-error {unknown type name '__rvv_float64m1x3_t'} } */
+void f___rvv_float64m1x4_t () {__rvv_float64m1x4_t t;} /* { dg-error {unknown type name '__rvv_float64m1x4_t'} } */
+void f___rvv_float64m1x5_t () {__rvv_float64m1x5_t t;} /* { dg-error {unknown type name '__rvv_float64m1x5_t'} } */
+void f___rvv_float64m1x6_t () {__rvv_float64m1x6_t t;} /* { dg-error {unknown type name '__rvv_float64m1x6_t'} } */
+void f___rvv_float64m1x7_t () {__rvv_float64m1x7_t t;} /* { dg-error {unknown type name '__rvv_float64m1x7_t'} } */
+void f___rvv_float64m1x8_t () {__rvv_float64m1x8_t t;} /* { dg-error {unknown type name '__rvv_float64m1x8_t'} } */
+void f___rvv_float64m2x2_t () {__rvv_float64m2x2_t t;} /* { dg-error {unknown type name '__rvv_float64m2x2_t'} } */
+void f___rvv_float64m2x3_t () {__rvv_float64m2x3_t t;} /* { dg-error {unknown type name '__rvv_float64m2x3_t'} } */
+void f___rvv_float64m2x4_t () {__rvv_float64m2x4_t t;} /* { dg-error {unknown type name '__rvv_float64m2x4_t'} } */
+void f___rvv_float64m4x2_t () {__rvv_float64m4x2_t t;} /* { dg-error {unknown type name '__rvv_float64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-1.c
new file mode 100644
index 00000000000..8d3680e4abf
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-1.c
@@ -0,0 +1,112 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
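+/* mf8 tuples have fractional-LMUL subparts, so each copy is expected to
+   expand to per-subpart vsetvli/vle8/vse8 sequences (one load/store per
+   subpart) plus an srai when scaling vlenb down to the mf8 subpart size.  */
+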
+void
+f_vint8mf8x2_t (void *base, void *out)
+{
+  vint8mf8x2_t v = *(vint8mf8x2_t*)base;
+  *(vint8mf8x2_t*)out = v;
+}
+
+void
+f_vuint8mf8x2_t (void *base, void *out)
+{
+  vuint8mf8x2_t v = *(vuint8mf8x2_t*)base;
+  *(vuint8mf8x2_t*)out = v;
+}
+
+void
+f_vint8mf8x3_t (void *base, void *out)
+{
+  vint8mf8x3_t v = *(vint8mf8x3_t*)base;
+  *(vint8mf8x3_t*)out = v;
+}
+
+void
+f_vuint8mf8x3_t (void *base, void *out)
+{
+  vuint8mf8x3_t v = *(vuint8mf8x3_t*)base;
+  *(vuint8mf8x3_t*)out = v;
+}
+
+void
+f_vint8mf8x4_t (void *base, void *out)
+{
+  vint8mf8x4_t v = *(vint8mf8x4_t*)base;
+  *(vint8mf8x4_t*)out = v;
+}
+
+void
+f_vuint8mf8x4_t (void *base, void *out)
+{
+  vuint8mf8x4_t v = *(vuint8mf8x4_t*)base;
+  *(vuint8mf8x4_t*)out = v;
+}
+
+void
+f_vint8mf8x5_t (void *base, void *out)
+{
+  vint8mf8x5_t v = *(vint8mf8x5_t*)base;
+  *(vint8mf8x5_t*)out = v;
+}
+
+void
+f_vuint8mf8x5_t (void *base, void *out)
+{
+  vuint8mf8x5_t v = *(vuint8mf8x5_t*)base;
+  *(vuint8mf8x5_t*)out = v;
+}
+
+void
+f_vint8mf8x6_t (void *base, void *out)
+{
+  vint8mf8x6_t v = *(vint8mf8x6_t*)base;
+  *(vint8mf8x6_t*)out = v;
+}
+
+void
+f_vuint8mf8x6_t (void *base, void *out)
+{
+  vuint8mf8x6_t v = *(vuint8mf8x6_t*)base;
+  *(vuint8mf8x6_t*)out = v;
+}
+
+void
+f_vint8mf8x7_t (void *base, void *out)
+{
+  vint8mf8x7_t v = *(vint8mf8x7_t*)base;
+  *(vint8mf8x7_t*)out = v;
+}
+
+void
+f_vuint8mf8x7_t (void *base, void *out)
+{
+  vuint8mf8x7_t v = *(vuint8mf8x7_t*)base;
+  *(vuint8mf8x7_t*)out = v;
+}
+
+void
+f_vint8mf8x8_t (void *base, void *out)
+{
+  vint8mf8x8_t v = *(vint8mf8x8_t*)base;
+  *(vint8mf8x8_t*)out = v;
+}
+
+void
+f_vuint8mf8x8_t (void *base, void *out)
+{
+  vuint8mf8x8_t v = *(vuint8mf8x8_t*)base;
+  *(vuint8mf8x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e8,\s*mf8,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-10.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-10.c
new file mode 100644
index 00000000000..1470e85db2d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-10.c
@@ -0,0 +1,55 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
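+/* m2 tuples are expected to be copied with whole-register vl2re16/vs2r
+   instructions, one pair per subpart (2+3+4 = 9 per signedness), plus an
+   slli when scaling vlenb up to the m2 subpart size.  */
+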
+void
+f_vint16m2x2_t (void *base, void *out)
+{
+  vint16m2x2_t v = *(vint16m2x2_t*)base;
+  *(vint16m2x2_t*)out = v;
+}
+
+void
+f_vuint16m2x2_t (void *base, void *out)
+{
+  vuint16m2x2_t v = *(vuint16m2x2_t*)base;
+  *(vuint16m2x2_t*)out = v;
+}
+
+void
+f_vint16m2x3_t (void *base, void *out)
+{
+  vint16m2x3_t v = *(vint16m2x3_t*)base;
+  *(vint16m2x3_t*)out = v;
+}
+
+void
+f_vuint16m2x3_t (void *base, void *out)
+{
+  vuint16m2x3_t v = *(vuint16m2x3_t*)base;
+  *(vuint16m2x3_t*)out = v;
+}
+
+void
+f_vint16m2x4_t (void *base, void *out)
+{
+  vint16m2x4_t v = *(vint16m2x4_t*)base;
+  *(vint16m2x4_t*)out = v;
+}
+
+void
+f_vuint16m2x4_t (void *base, void *out)
+{
+  vuint16m2x4_t v = *(vuint16m2x4_t*)base;
+  *(vuint16m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re16\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-11.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-11.c
new file mode 100644
index 00000000000..134e633695f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-11.c
@@ -0,0 +1,23 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint16m4x2_t (void *base, void *out)
+{
+  vint16m4x2_t v = *(vint16m4x2_t*)base;
+  *(vint16m4x2_t*)out = v;
+}
+
+void
+f_vuint16m4x2_t (void *base, void *out)
+{
+  vuint16m4x2_t v = *(vuint16m4x2_t*)base;
+  *(vuint16m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re16\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-12.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-12.c
new file mode 100644
index 00000000000..1c6967aa364
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-12.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint32mf2x2_t (void *base, void *out)
+{
+  vint32mf2x2_t v = *(vint32mf2x2_t*)base;
+  *(vint32mf2x2_t*)out = v;
+}
+
+void
+f_vuint32mf2x2_t (void *base, void *out)
+{
+  vuint32mf2x2_t v = *(vuint32mf2x2_t*)base;
+  *(vuint32mf2x2_t*)out = v;
+}
+
+void
+f_vint32mf2x3_t (void *base, void *out)
+{
+  vint32mf2x3_t v = *(vint32mf2x3_t*)base;
+  *(vint32mf2x3_t*)out = v;
+}
+
+void
+f_vuint32mf2x3_t (void *base, void *out)
+{
+  vuint32mf2x3_t v = *(vuint32mf2x3_t*)base;
+  *(vuint32mf2x3_t*)out = v;
+}
+
+void
+f_vint32mf2x4_t (void *base, void *out)
+{
+  vint32mf2x4_t v = *(vint32mf2x4_t*)base;
+  *(vint32mf2x4_t*)out = v;
+}
+
+void
+f_vuint32mf2x4_t (void *base, void *out)
+{
+  vuint32mf2x4_t v = *(vuint32mf2x4_t*)base;
+  *(vuint32mf2x4_t*)out = v;
+}
+
+void
+f_vint32mf2x5_t (void *base, void *out)
+{
+  vint32mf2x5_t v = *(vint32mf2x5_t*)base;
+  *(vint32mf2x5_t*)out = v;
+}
+
+void
+f_vuint32mf2x5_t (void *base, void *out)
+{
+  vuint32mf2x5_t v = *(vuint32mf2x5_t*)base;
+  *(vuint32mf2x5_t*)out = v;
+}
+
+void
+f_vint32mf2x6_t (void *base, void *out)
+{
+  vint32mf2x6_t v = *(vint32mf2x6_t*)base;
+  *(vint32mf2x6_t*)out = v;
+}
+
+void
+f_vuint32mf2x6_t (void *base, void *out)
+{
+  vuint32mf2x6_t v = *(vuint32mf2x6_t*)base;
+  *(vuint32mf2x6_t*)out = v;
+}
+
+void
+f_vint32mf2x7_t (void *base, void *out)
+{
+  vint32mf2x7_t v = *(vint32mf2x7_t*)base;
+  *(vint32mf2x7_t*)out = v;
+}
+
+void
+f_vuint32mf2x7_t (void *base, void *out)
+{
+  vuint32mf2x7_t v = *(vuint32mf2x7_t*)base;
+  *(vuint32mf2x7_t*)out = v;
+}
+
+void
+f_vint32mf2x8_t (void *base, void *out)
+{
+  vint32mf2x8_t v = *(vint32mf2x8_t*)base;
+  *(vint32mf2x8_t*)out = v;
+}
+
+void
+f_vuint32mf2x8_t (void *base, void *out)
+{
+  vuint32mf2x8_t v = *(vuint32mf2x8_t*)base;
+  *(vuint32mf2x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e32,\s*mf2,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle32\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse32\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-13.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-13.c
new file mode 100644
index 00000000000..30e3e77415f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-13.c
@@ -0,0 +1,107 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint32m1x2_t (void *base, void *out)
+{
+  vint32m1x2_t v = *(vint32m1x2_t*)base;
+  *(vint32m1x2_t*)out = v;
+}
+
+void
+f_vuint32m1x2_t (void *base, void *out)
+{
+  vuint32m1x2_t v = *(vuint32m1x2_t*)base;
+  *(vuint32m1x2_t*)out = v;
+}
+
+void
+f_vint32m1x3_t (void *base, void *out)
+{
+  vint32m1x3_t v = *(vint32m1x3_t*)base;
+  *(vint32m1x3_t*)out = v;
+}
+
+void
+f_vuint32m1x3_t (void *base, void *out)
+{
+  vuint32m1x3_t v = *(vuint32m1x3_t*)base;
+  *(vuint32m1x3_t*)out = v;
+}
+
+void
+f_vint32m1x4_t (void *base, void *out)
+{
+  vint32m1x4_t v = *(vint32m1x4_t*)base;
+  *(vint32m1x4_t*)out = v;
+}
+
+void
+f_vuint32m1x4_t (void *base, void *out)
+{
+  vuint32m1x4_t v = *(vuint32m1x4_t*)base;
+  *(vuint32m1x4_t*)out = v;
+}
+
+void
+f_vint32m1x5_t (void *base, void *out)
+{
+  vint32m1x5_t v = *(vint32m1x5_t*)base;
+  *(vint32m1x5_t*)out = v;
+}
+
+void
+f_vuint32m1x5_t (void *base, void *out)
+{
+  vuint32m1x5_t v = *(vuint32m1x5_t*)base;
+  *(vuint32m1x5_t*)out = v;
+}
+
+void
+f_vint32m1x6_t (void *base, void *out)
+{
+  vint32m1x6_t v = *(vint32m1x6_t*)base;
+  *(vint32m1x6_t*)out = v;
+}
+
+void
+f_vuint32m1x6_t (void *base, void *out)
+{
+  vuint32m1x6_t v = *(vuint32m1x6_t*)base;
+  *(vuint32m1x6_t*)out = v;
+}
+
+void
+f_vint32m1x7_t (void *base, void *out)
+{
+  vint32m1x7_t v = *(vint32m1x7_t*)base;
+  *(vint32m1x7_t*)out = v;
+}
+
+void
+f_vuint32m1x7_t (void *base, void *out)
+{
+  vuint32m1x7_t v = *(vuint32m1x7_t*)base;
+  *(vuint32m1x7_t*)out = v;
+}
+
+void
+f_vint32m1x8_t (void *base, void *out)
+{
+  vint32m1x8_t v = *(vint32m1x8_t*)base;
+  *(vint32m1x8_t*)out = v;
+}
+
+void
+f_vuint32m1x8_t (void *base, void *out)
+{
+  vuint32m1x8_t v = *(vuint32m1x8_t*)base;
+  *(vuint32m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-14.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-14.c
new file mode 100644
index 00000000000..cc612ef55c2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-14.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint32m2x2_t (void *base, void *out)
+{
+  vint32m2x2_t v = *(vint32m2x2_t*)base;
+  *(vint32m2x2_t*)out = v;
+}
+
+void
+f_vuint32m2x2_t (void *base, void *out)
+{
+  vuint32m2x2_t v = *(vuint32m2x2_t*)base;
+  *(vuint32m2x2_t*)out = v;
+}
+
+void
+f_vint32m2x3_t (void *base, void *out)
+{
+  vint32m2x3_t v = *(vint32m2x3_t*)base;
+  *(vint32m2x3_t*)out = v;
+}
+
+void
+f_vuint32m2x3_t (void *base, void *out)
+{
+  vuint32m2x3_t v = *(vuint32m2x3_t*)base;
+  *(vuint32m2x3_t*)out = v;
+}
+
+void
+f_vint32m2x4_t (void *base, void *out)
+{
+  vint32m2x4_t v = *(vint32m2x4_t*)base;
+  *(vint32m2x4_t*)out = v;
+}
+
+void
+f_vuint32m2x4_t (void *base, void *out)
+{
+  vuint32m2x4_t v = *(vuint32m2x4_t*)base;
+  *(vuint32m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-15.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-15.c
new file mode 100644
index 00000000000..247155431c5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-15.c
@@ -0,0 +1,23 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint32m4x2_t (void *base, void *out)
+{
+  vint32m4x2_t v = *(vint32m4x2_t*)base;
+  *(vint32m4x2_t*)out = v;
+}
+
+void
+f_vuint32m4x2_t (void *base, void *out)
+{
+  vuint32m4x2_t v = *(vuint32m4x2_t*)base;
+  *(vuint32m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-16.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-16.c
new file mode 100644
index 00000000000..5bdc84f4b37
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-16.c
@@ -0,0 +1,107 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint64m1x2_t (void *base, void *out)
+{
+  vint64m1x2_t v = *(vint64m1x2_t*)base;
+  *(vint64m1x2_t*)out = v;
+}
+
+void
+f_vuint64m1x2_t (void *base, void *out)
+{
+  vuint64m1x2_t v = *(vuint64m1x2_t*)base;
+  *(vuint64m1x2_t*)out = v;
+}
+
+void
+f_vint64m1x3_t (void *base, void *out)
+{
+  vint64m1x3_t v = *(vint64m1x3_t*)base;
+  *(vint64m1x3_t*)out = v;
+}
+
+void
+f_vuint64m1x3_t (void *base, void *out)
+{
+  vuint64m1x3_t v = *(vuint64m1x3_t*)base;
+  *(vuint64m1x3_t*)out = v;
+}
+
+void
+f_vint64m1x4_t (void *base, void *out)
+{
+  vint64m1x4_t v = *(vint64m1x4_t*)base;
+  *(vint64m1x4_t*)out = v;
+}
+
+void
+f_vuint64m1x4_t (void *base, void *out)
+{
+  vuint64m1x4_t v = *(vuint64m1x4_t*)base;
+  *(vuint64m1x4_t*)out = v;
+}
+
+void
+f_vint64m1x5_t (void *base, void *out)
+{
+  vint64m1x5_t v = *(vint64m1x5_t*)base;
+  *(vint64m1x5_t*)out = v;
+}
+
+void
+f_vuint64m1x5_t (void *base, void *out)
+{
+  vuint64m1x5_t v = *(vuint64m1x5_t*)base;
+  *(vuint64m1x5_t*)out = v;
+}
+
+void
+f_vint64m1x6_t (void *base, void *out)
+{
+  vint64m1x6_t v = *(vint64m1x6_t*)base;
+  *(vint64m1x6_t*)out = v;
+}
+
+void
+f_vuint64m1x6_t (void *base, void *out)
+{
+  vuint64m1x6_t v = *(vuint64m1x6_t*)base;
+  *(vuint64m1x6_t*)out = v;
+}
+
+void
+f_vint64m1x7_t (void *base, void *out)
+{
+  vint64m1x7_t v = *(vint64m1x7_t*)base;
+  *(vint64m1x7_t*)out = v;
+}
+
+void
+f_vuint64m1x7_t (void *base, void *out)
+{
+  vuint64m1x7_t v = *(vuint64m1x7_t*)base;
+  *(vuint64m1x7_t*)out = v;
+}
+
+void
+f_vint64m1x8_t (void *base, void *out)
+{
+  vint64m1x8_t v = *(vint64m1x8_t*)base;
+  *(vint64m1x8_t*)out = v;
+}
+
+void
+f_vuint64m1x8_t (void *base, void *out)
+{
+  vuint64m1x8_t v = *(vuint64m1x8_t*)base;
+  *(vuint64m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-17.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-17.c
new file mode 100644
index 00000000000..818aa18b711
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-17.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint64m2x2_t (void *base, void *out)
+{
+  vint64m2x2_t v = *(vint64m2x2_t*)base;
+  *(vint64m2x2_t*)out = v;
+}
+
+void
+f_vuint64m2x2_t (void *base, void *out)
+{
+  vuint64m2x2_t v = *(vuint64m2x2_t*)base;
+  *(vuint64m2x2_t*)out = v;
+}
+
+void
+f_vint64m2x3_t (void *base, void *out)
+{
+  vint64m2x3_t v = *(vint64m2x3_t*)base;
+  *(vint64m2x3_t*)out = v;
+}
+
+void
+f_vuint64m2x3_t (void *base, void *out)
+{
+  vuint64m2x3_t v = *(vuint64m2x3_t*)base;
+  *(vuint64m2x3_t*)out = v;
+}
+
+void
+f_vint64m2x4_t (void *base, void *out)
+{
+  vint64m2x4_t v = *(vint64m2x4_t*)base;
+  *(vint64m2x4_t*)out = v;
+}
+
+void
+f_vuint64m2x4_t (void *base, void *out)
+{
+  vuint64m2x4_t v = *(vuint64m2x4_t*)base;
+  *(vuint64m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-18.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-18.c
new file mode 100644
index 00000000000..599340c88c1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-18.c
@@ -0,0 +1,23 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint64m4x2_t (void *base, void *out)
+{
+  vint64m4x2_t v = *(vint64m4x2_t*)base;
+  *(vint64m4x2_t*)out = v;
+}
+
+void
+f_vuint64m4x2_t (void *base, void *out)
+{
+  vuint64m4x2_t v = *(vuint64m4x2_t*)base;
+  *(vuint64m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-19.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-19.c
new file mode 100644
index 00000000000..bd22e8dd904
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-19.c
@@ -0,0 +1,59 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat32mf2x2_t (void *base, void *out)
+{
+  vfloat32mf2x2_t v = *(vfloat32mf2x2_t*)base;
+  *(vfloat32mf2x2_t*)out = v;
+}
+
+void
+f_vfloat32mf2x3_t (void *base, void *out)
+{
+  vfloat32mf2x3_t v = *(vfloat32mf2x3_t*)base;
+  *(vfloat32mf2x3_t*)out = v;
+}
+
+void
+f_vfloat32mf2x4_t (void *base, void *out)
+{
+  vfloat32mf2x4_t v = *(vfloat32mf2x4_t*)base;
+  *(vfloat32mf2x4_t*)out = v;
+}
+
+void
+f_vfloat32mf2x5_t (void *base, void *out)
+{
+  vfloat32mf2x5_t v = *(vfloat32mf2x5_t*)base;
+  *(vfloat32mf2x5_t*)out = v;
+}
+
+void
+f_vfloat32mf2x6_t (void *base, void *out)
+{
+  vfloat32mf2x6_t v = *(vfloat32mf2x6_t*)base;
+  *(vfloat32mf2x6_t*)out = v;
+}
+
+void
+f_vfloat32mf2x7_t (void *base, void *out)
+{
+  vfloat32mf2x7_t v = *(vfloat32mf2x7_t*)base;
+  *(vfloat32mf2x7_t*)out = v;
+}
+
+void
+f_vfloat32mf2x8_t (void *base, void *out)
+{
+  vfloat32mf2x8_t v = *(vfloat32mf2x8_t*)base;
+  *(vfloat32mf2x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e32,\s*mf2,\s*t[au],\s*m[au]} 7 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle32\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
+/* { dg-final { scan-assembler-times {vse32\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-2.c
new file mode 100644
index 00000000000..68e9d3cf8ce
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-2.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint8mf4x2_t (void *base, void *out)
+{
+  vint8mf4x2_t v = *(vint8mf4x2_t*)base;
+  *(vint8mf4x2_t*)out = v;
+}
+
+void
+f_vuint8mf4x2_t (void *base, void *out)
+{
+  vuint8mf4x2_t v = *(vuint8mf4x2_t*)base;
+  *(vuint8mf4x2_t*)out = v;
+}
+
+void
+f_vint8mf4x3_t (void *base, void *out)
+{
+  vint8mf4x3_t v = *(vint8mf4x3_t*)base;
+  *(vint8mf4x3_t*)out = v;
+}
+
+void
+f_vuint8mf4x3_t (void *base, void *out)
+{
+  vuint8mf4x3_t v = *(vuint8mf4x3_t*)base;
+  *(vuint8mf4x3_t*)out = v;
+}
+
+void
+f_vint8mf4x4_t (void *base, void *out)
+{
+  vint8mf4x4_t v = *(vint8mf4x4_t*)base;
+  *(vint8mf4x4_t*)out = v;
+}
+
+void
+f_vuint8mf4x4_t (void *base, void *out)
+{
+  vuint8mf4x4_t v = *(vuint8mf4x4_t*)base;
+  *(vuint8mf4x4_t*)out = v;
+}
+
+void
+f_vint8mf4x5_t (void *base, void *out)
+{
+  vint8mf4x5_t v = *(vint8mf4x5_t*)base;
+  *(vint8mf4x5_t*)out = v;
+}
+
+void
+f_vuint8mf4x5_t (void *base, void *out)
+{
+  vuint8mf4x5_t v = *(vuint8mf4x5_t*)base;
+  *(vuint8mf4x5_t*)out = v;
+}
+
+void
+f_vint8mf4x6_t (void *base, void *out)
+{
+  vint8mf4x6_t v = *(vint8mf4x6_t*)base;
+  *(vint8mf4x6_t*)out = v;
+}
+
+void
+f_vuint8mf4x6_t (void *base, void *out)
+{
+  vuint8mf4x6_t v = *(vuint8mf4x6_t*)base;
+  *(vuint8mf4x6_t*)out = v;
+}
+
+void
+f_vint8mf4x7_t (void *base, void *out)
+{
+  vint8mf4x7_t v = *(vint8mf4x7_t*)base;
+  *(vint8mf4x7_t*)out = v;
+}
+
+void
+f_vuint8mf4x7_t (void *base, void *out)
+{
+  vuint8mf4x7_t v = *(vuint8mf4x7_t*)base;
+  *(vuint8mf4x7_t*)out = v;
+}
+
+void
+f_vint8mf4x8_t (void *base, void *out)
+{
+  vint8mf4x8_t v = *(vint8mf4x8_t*)base;
+  *(vint8mf4x8_t*)out = v;
+}
+
+void
+f_vuint8mf4x8_t (void *base, void *out)
+{
+  vuint8mf4x8_t v = *(vuint8mf4x8_t*)base;
+  *(vuint8mf4x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e8,\s*mf4,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-20.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-20.c
new file mode 100644
index 00000000000..305b7f62a3a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-20.c
@@ -0,0 +1,58 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat32m1x2_t (void *base, void *out)
+{
+  vfloat32m1x2_t v = *(vfloat32m1x2_t*)base;
+  *(vfloat32m1x2_t*)out = v;
+}
+
+void
+f_vfloat32m1x3_t (void *base, void *out)
+{
+  vfloat32m1x3_t v = *(vfloat32m1x3_t*)base;
+  *(vfloat32m1x3_t*)out = v;
+}
+
+void
+f_vfloat32m1x4_t (void *base, void *out)
+{
+  vfloat32m1x4_t v = *(vfloat32m1x4_t*)base;
+  *(vfloat32m1x4_t*)out = v;
+}
+
+void
+f_vfloat32m1x5_t (void *base, void *out)
+{
+  vfloat32m1x5_t v = *(vfloat32m1x5_t*)base;
+  *(vfloat32m1x5_t*)out = v;
+}
+
+void
+f_vfloat32m1x6_t (void *base, void *out)
+{
+  vfloat32m1x6_t v = *(vfloat32m1x6_t*)base;
+  *(vfloat32m1x6_t*)out = v;
+}
+
+void
+f_vfloat32m1x7_t (void *base, void *out)
+{
+  vfloat32m1x7_t v = *(vfloat32m1x7_t*)base;
+  *(vfloat32m1x7_t*)out = v;
+}
+
+void
+f_vfloat32m1x8_t (void *base, void *out)
+{
+  vfloat32m1x8_t v = *(vfloat32m1x8_t*)base;
+  *(vfloat32m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-21.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-21.c
new file mode 100644
index 00000000000..9ff8efa7095
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-21.c
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat32m2x2_t (void *base, void *out)
+{
+  vfloat32m2x2_t v = *(vfloat32m2x2_t*)base;
+  *(vfloat32m2x2_t*)out = v;
+}
+
+void
+f_vfloat32m2x3_t (void *base, void *out)
+{
+  vfloat32m2x3_t v = *(vfloat32m2x3_t*)base;
+  *(vfloat32m2x3_t*)out = v;
+}
+
+void
+f_vfloat32m2x4_t (void *base, void *out)
+{
+  vfloat32m2x4_t v = *(vfloat32m2x4_t*)base;
+  *(vfloat32m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 9 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 9 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-22.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-22.c
new file mode 100644
index 00000000000..60bdfebd4ae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-22.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat32m4x2_t (void *base, void *out)
+{
+  vfloat32m4x2_t v = *(vfloat32m4x2_t*)base;
+  *(vfloat32m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re32\.v\tv[0-9]+,0\([a-x0-9]+\)} 2 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 2 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-23.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-23.c
new file mode 100644
index 00000000000..0e89365ac2a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-23.c
@@ -0,0 +1,58 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat64m1x2_t (void *base, void *out)
+{
+  vfloat64m1x2_t v = *(vfloat64m1x2_t*)base;
+  *(vfloat64m1x2_t*)out = v;
+}
+
+void
+f_vfloat64m1x3_t (void *base, void *out)
+{
+  vfloat64m1x3_t v = *(vfloat64m1x3_t*)base;
+  *(vfloat64m1x3_t*)out = v;
+}
+
+void
+f_vfloat64m1x4_t (void *base, void *out)
+{
+  vfloat64m1x4_t v = *(vfloat64m1x4_t*)base;
+  *(vfloat64m1x4_t*)out = v;
+}
+
+void
+f_vfloat64m1x5_t (void *base, void *out)
+{
+  vfloat64m1x5_t v = *(vfloat64m1x5_t*)base;
+  *(vfloat64m1x5_t*)out = v;
+}
+
+void
+f_vfloat64m1x6_t (void *base, void *out)
+{
+  vfloat64m1x6_t v = *(vfloat64m1x6_t*)base;
+  *(vfloat64m1x6_t*)out = v;
+}
+
+void
+f_vfloat64m1x7_t (void *base, void *out)
+{
+  vfloat64m1x7_t v = *(vfloat64m1x7_t*)base;
+  *(vfloat64m1x7_t*)out = v;
+}
+
+void
+f_vfloat64m1x8_t (void *base, void *out)
+{
+  vfloat64m1x8_t v = *(vfloat64m1x8_t*)base;
+  *(vfloat64m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 35 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-24.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-24.c
new file mode 100644
index 00000000000..7708a8fab90
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-24.c
@@ -0,0 +1,30 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat64m2x2_t (void *base, void *out)
+{
+  vfloat64m2x2_t v = *(vfloat64m2x2_t*)base;
+  *(vfloat64m2x2_t*)out = v;
+}
+
+void
+f_vfloat64m2x3_t (void *base, void *out)
+{
+  vfloat64m2x3_t v = *(vfloat64m2x3_t*)base;
+  *(vfloat64m2x3_t*)out = v;
+}
+
+void
+f_vfloat64m2x4_t (void *base, void *out)
+{
+  vfloat64m2x4_t v = *(vfloat64m2x4_t*)base;
+  *(vfloat64m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 9 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 9 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-25.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-25.c
new file mode 100644
index 00000000000..ffa5796df28
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-25.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vfloat64m4x2_t (void *base, void *out)
+{
+  vfloat64m4x2_t v = *(vfloat64m4x2_t*)base;
+  *(vfloat64m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re64\.v\tv[0-9]+,0\([a-x0-9]+\)} 2 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 2 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-26.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-26.c
new file mode 100644
index 00000000000..67c660538a5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-26.c
@@ -0,0 +1,37 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
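+/* Each tuple below is stored while still uninitialized; the test expects
+   the expander to materialize it as a vmv.v.i zero splat.  */
+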
+void
+f_vint8mf8x2_t (void *out)
+{
+  vint8mf8x2_t v;
+  *(vint8mf8x2_t*)out = v;
+}
+
+void
+f_vint8m1x2_t (void *out)
+{
+  vint8m1x2_t v;
+  *(vint8m1x2_t*)out = v;
+}
+
+void
+f_vfloat32mf2x8_t (void *out)
+{
+  vfloat32mf2x8_t v;
+  *(vfloat32mf2x8_t*)out = v;
+}
+
+void
+f_vfloat64m4x2_t (void *out)
+{
+  vfloat64m4x2_t v;
+  *(vfloat64m4x2_t*)out = v;
+}
+
/* { dg-final { scan-assembler-times {vmv\.v\.i\tv[0-9]+,\s*0} 4 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-27.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-27.c
new file mode 100644
index 00000000000..89d8c697072
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-27.c
@@ -0,0 +1,33 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
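+/* Copy a three-register tuple between overlapping pinned register groups;
+   the vmv1r.v scans check that the subpart moves are ordered so no source
+   register is clobbered before it is read.  */
+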
+void mov0 (int8_t *in, int8_t *out)
+{
+  register vint8mf4x3_t v1 asm("v1") = *(vint8mf4x3_t*)in;
+  asm volatile ("# %0"::"vr"(v1));
+  register vint8mf4x3_t v2 asm("v2") = v1;
+  *(vint8mf4x3_t*)out = v2;
+  asm volatile ("# %0"::"vr"(v2));
+}
+
+void mov1 (int8_t *in, int8_t *out)
+{
+  register vint8mf4x3_t v1 asm("v2") = *(vint8mf4x3_t*)in;
+  asm volatile ("# %0"::"vr"(v1));
+  register vint8mf4x3_t v2 asm("v1") = v1;
+  *(vint8mf4x3_t*)out = v2;
+  asm volatile ("# %0"::"vr"(v2));
+}
+
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv4,v3} 1 } } */
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv3,v2} 1 } } */
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv2,v1} 1 } } */
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv1,v2} 1 } } */
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv2,v3} 1 } } */
+/* { dg-final { scan-assembler-times {vmv1r\.v\tv3,v4} 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-3.c
new file mode 100644
index 00000000000..52e61c70165
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-3.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint8mf2x2_t (void *base, void *out)
+{
+  vint8mf2x2_t v = *(vint8mf2x2_t*)base;
+  *(vint8mf2x2_t*)out = v;
+}
+
+void
+f_vuint8mf2x2_t (void *base, void *out)
+{
+  vuint8mf2x2_t v = *(vuint8mf2x2_t*)base;
+  *(vuint8mf2x2_t*)out = v;
+}
+
+void
+f_vint8mf2x3_t (void *base, void *out)
+{
+  vint8mf2x3_t v = *(vint8mf2x3_t*)base;
+  *(vint8mf2x3_t*)out = v;
+}
+
+void
+f_vuint8mf2x3_t (void *base, void *out)
+{
+  vuint8mf2x3_t v = *(vuint8mf2x3_t*)base;
+  *(vuint8mf2x3_t*)out = v;
+}
+
+void
+f_vint8mf2x4_t (void *base, void *out)
+{
+  vint8mf2x4_t v = *(vint8mf2x4_t*)base;
+  *(vint8mf2x4_t*)out = v;
+}
+
+void
+f_vuint8mf2x4_t (void *base, void *out)
+{
+  vuint8mf2x4_t v = *(vuint8mf2x4_t*)base;
+  *(vuint8mf2x4_t*)out = v;
+}
+
+void
+f_vint8mf2x5_t (void *base, void *out)
+{
+  vint8mf2x5_t v = *(vint8mf2x5_t*)base;
+  *(vint8mf2x5_t*)out = v;
+}
+
+void
+f_vuint8mf2x5_t (void *base, void *out)
+{
+  vuint8mf2x5_t v = *(vuint8mf2x5_t*)base;
+  *(vuint8mf2x5_t*)out = v;
+}
+
+void
+f_vint8mf2x6_t (void *base, void *out)
+{
+  vint8mf2x6_t v = *(vint8mf2x6_t*)base;
+  *(vint8mf2x6_t*)out = v;
+}
+
+void
+f_vuint8mf2x6_t (void *base, void *out)
+{
+  vuint8mf2x6_t v = *(vuint8mf2x6_t*)base;
+  *(vuint8mf2x6_t*)out = v;
+}
+
+void
+f_vint8mf2x7_t (void *base, void *out)
+{
+  vint8mf2x7_t v = *(vint8mf2x7_t*)base;
+  *(vint8mf2x7_t*)out = v;
+}
+
+void
+f_vuint8mf2x7_t (void *base, void *out)
+{
+  vuint8mf2x7_t v = *(vuint8mf2x7_t*)base;
+  *(vuint8mf2x7_t*)out = v;
+}
+
+void
+f_vint8mf2x8_t (void *base, void *out)
+{
+  vint8mf2x8_t v = *(vint8mf2x8_t*)base;
+  *(vint8mf2x8_t*)out = v;
+}
+
+void
+f_vuint8mf2x8_t (void *base, void *out)
+{
+  vuint8mf2x8_t v = *(vuint8mf2x8_t*)base;
+  *(vuint8mf2x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e8,\s*mf2,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-4.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-4.c
new file mode 100644
index 00000000000..029d11c5087
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-4.c
@@ -0,0 +1,110 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
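+/* m1 tuples are expected to be copied with vl1re8/vs1r whole-register
+   accesses; the subpart offset is vlenb itself, so no srai or slli.  */
+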
+void
+f_vint8m1x2_t (void *base, void *out)
+{
+  vint8m1x2_t v = *(vint8m1x2_t*)base;
+  *(vint8m1x2_t*)out = v;
+}
+
+void
+f_vuint8m1x2_t (void *base, void *out)
+{
+  vuint8m1x2_t v = *(vuint8m1x2_t*)base;
+  *(vuint8m1x2_t*)out = v;
+}
+
+void
+f_vint8m1x3_t (void *base, void *out)
+{
+  vint8m1x3_t v = *(vint8m1x3_t*)base;
+  *(vint8m1x3_t*)out = v;
+}
+
+void
+f_vuint8m1x3_t (void *base, void *out)
+{
+  vuint8m1x3_t v = *(vuint8m1x3_t*)base;
+  *(vuint8m1x3_t*)out = v;
+}
+
+void
+f_vint8m1x4_t (void *base, void *out)
+{
+  vint8m1x4_t v = *(vint8m1x4_t*)base;
+  *(vint8m1x4_t*)out = v;
+}
+
+void
+f_vuint8m1x4_t (void *base, void *out)
+{
+  vuint8m1x4_t v = *(vuint8m1x4_t*)base;
+  *(vuint8m1x4_t*)out = v;
+}
+
+void
+f_vint8m1x5_t (void *base, void *out)
+{
+  vint8m1x5_t v = *(vint8m1x5_t*)base;
+  *(vint8m1x5_t*)out = v;
+}
+
+void
+f_vuint8m1x5_t (void *base, void *out)
+{
+  vuint8m1x5_t v = *(vuint8m1x5_t*)base;
+  *(vuint8m1x5_t*)out = v;
+}
+
+void
+f_vint8m1x6_t (void *base, void *out)
+{
+  vint8m1x6_t v = *(vint8m1x6_t*)base;
+  *(vint8m1x6_t*)out = v;
+}
+
+void
+f_vuint8m1x6_t (void *base, void *out)
+{
+  vuint8m1x6_t v = *(vuint8m1x6_t*)base;
+  *(vuint8m1x6_t*)out = v;
+}
+
+void
+f_vint8m1x7_t (void *base, void *out)
+{
+  vint8m1x7_t v = *(vint8m1x7_t*)base;
+  *(vint8m1x7_t*)out = v;
+}
+
+void
+f_vuint8m1x7_t (void *base, void *out)
+{
+  vuint8m1x7_t v = *(vuint8m1x7_t*)base;
+  *(vuint8m1x7_t*)out = v;
+}
+
+void
+f_vint8m1x8_t (void *base, void *out)
+{
+  vint8m1x8_t v = *(vint8m1x8_t*)base;
+  *(vint8m1x8_t*)out = v;
+}
+
+void
+f_vuint8m1x8_t (void *base, void *out)
+{
+  vuint8m1x8_t v = *(vuint8m1x8_t*)base;
+  *(vuint8m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re8\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-5.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-5.c
new file mode 100644
index 00000000000..e1ca116603c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-5.c
@@ -0,0 +1,51 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint8m2x2_t (void *base, void *out)
+{
+  vint8m2x2_t v = *(vint8m2x2_t*)base;
+  *(vint8m2x2_t*)out = v;
+}
+
+void
+f_vuint8m2x2_t (void *base, void *out)
+{
+  vuint8m2x2_t v = *(vuint8m2x2_t*)base;
+  *(vuint8m2x2_t*)out = v;
+}
+
+void
+f_vint8m2x3_t (void *base, void *out)
+{
+  vint8m2x3_t v = *(vint8m2x3_t*)base;
+  *(vint8m2x3_t*)out = v;
+}
+
+void
+f_vuint8m2x3_t (void *base, void *out)
+{
+  vuint8m2x3_t v = *(vuint8m2x3_t*)base;
+  *(vuint8m2x3_t*)out = v;
+}
+
+void
+f_vint8m2x4_t (void *base, void *out)
+{
+  vint8m2x4_t v = *(vint8m2x4_t*)base;
+  *(vint8m2x4_t*)out = v;
+}
+
+void
+f_vuint8m2x4_t (void *base, void *out)
+{
+  vuint8m2x4_t v = *(vuint8m2x4_t*)base;
+  *(vuint8m2x4_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl2re8\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
+/* { dg-final { scan-assembler-times {vs2r\.v\tv[0-9]+,0\([a-x0-9]+\)} 18 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-6.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-6.c
new file mode 100644
index 00000000000..0c9741bc18f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-6.c
@@ -0,0 +1,23 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint8m4x2_t (void *base, void *out)
+{
+  vint8m4x2_t v = *(vint8m4x2_t*)base;
+  *(vint8m4x2_t*)out = v;
+}
+
+void
+f_vuint8m4x2_t (void *base, void *out)
+{
+  vuint8m4x2_t v = *(vuint8m4x2_t*)base;
+  *(vuint8m4x2_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler {slli} } } */
+/* { dg-final { scan-assembler-times {vl4re8\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
+/* { dg-final { scan-assembler-times {vs4r\.v\tv[0-9]+,0\([a-x0-9]+\)} 4 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-7.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-7.c
new file mode 100644
index 00000000000..f0a624e82a2
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-7.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint16mf4x2_t (void *base, void *out)
+{
+  vint16mf4x2_t v = *(vint16mf4x2_t*)base;
+  *(vint16mf4x2_t*)out = v;
+}
+
+void
+f_vuint16mf4x2_t (void *base, void *out)
+{
+  vuint16mf4x2_t v = *(vuint16mf4x2_t*)base;
+  *(vuint16mf4x2_t*)out = v;
+}
+
+void
+f_vint16mf4x3_t (void *base, void *out)
+{
+  vint16mf4x3_t v = *(vint16mf4x3_t*)base;
+  *(vint16mf4x3_t*)out = v;
+}
+
+void
+f_vuint16mf4x3_t (void *base, void *out)
+{
+  vuint16mf4x3_t v = *(vuint16mf4x3_t*)base;
+  *(vuint16mf4x3_t*)out = v;
+}
+
+void
+f_vint16mf4x4_t (void *base, void *out)
+{
+  vint16mf4x4_t v = *(vint16mf4x4_t*)base;
+  *(vint16mf4x4_t*)out = v;
+}
+
+void
+f_vuint16mf4x4_t (void *base, void *out)
+{
+  vuint16mf4x4_t v = *(vuint16mf4x4_t*)base;
+  *(vuint16mf4x4_t*)out = v;
+}
+
+void
+f_vint16mf4x5_t (void *base, void *out)
+{
+  vint16mf4x5_t v = *(vint16mf4x5_t*)base;
+  *(vint16mf4x5_t*)out = v;
+}
+
+void
+f_vuint16mf4x5_t (void *base, void *out)
+{
+  vuint16mf4x5_t v = *(vuint16mf4x5_t*)base;
+  *(vuint16mf4x5_t*)out = v;
+}
+
+void
+f_vint16mf4x6_t (void *base, void *out)
+{
+  vint16mf4x6_t v = *(vint16mf4x6_t*)base;
+  *(vint16mf4x6_t*)out = v;
+}
+
+void
+f_vuint16mf4x6_t (void *base, void *out)
+{
+  vuint16mf4x6_t v = *(vuint16mf4x6_t*)base;
+  *(vuint16mf4x6_t*)out = v;
+}
+
+void
+f_vint16mf4x7_t (void *base, void *out)
+{
+  vint16mf4x7_t v = *(vint16mf4x7_t*)base;
+  *(vint16mf4x7_t*)out = v;
+}
+
+void
+f_vuint16mf4x7_t (void *base, void *out)
+{
+  vuint16mf4x7_t v = *(vuint16mf4x7_t*)base;
+  *(vuint16mf4x7_t*)out = v;
+}
+
+void
+f_vint16mf4x8_t (void *base, void *out)
+{
+  vint16mf4x8_t v = *(vint16mf4x8_t*)base;
+  *(vint16mf4x8_t*)out = v;
+}
+
+void
+f_vuint16mf4x8_t (void *base, void *out)
+{
+  vuint16mf4x8_t v = *(vuint16mf4x8_t*)base;
+  *(vuint16mf4x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf4,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle16\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse16\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-8.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-8.c
new file mode 100644
index 00000000000..ce294ae8ff8
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-8.c
@@ -0,0 +1,108 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint16mf2x2_t (void *base, void *out)
+{
+  vint16mf2x2_t v = *(vint16mf2x2_t*)base;
+  *(vint16mf2x2_t*)out = v;
+}
+
+void
+f_vuint16mf2x2_t (void *base, void *out)
+{
+  vuint16mf2x2_t v = *(vuint16mf2x2_t*)base;
+  *(vuint16mf2x2_t*)out = v;
+}
+
+void
+f_vint16mf2x3_t (void *base, void *out)
+{
+  vint16mf2x3_t v = *(vint16mf2x3_t*)base;
+  *(vint16mf2x3_t*)out = v;
+}
+
+void
+f_vuint16mf2x3_t (void *base, void *out)
+{
+  vuint16mf2x3_t v = *(vuint16mf2x3_t*)base;
+  *(vuint16mf2x3_t*)out = v;
+}
+
+void
+f_vint16mf2x4_t (void *base, void *out)
+{
+  vint16mf2x4_t v = *(vint16mf2x4_t*)base;
+  *(vint16mf2x4_t*)out = v;
+}
+
+void
+f_vuint16mf2x4_t (void *base, void *out)
+{
+  vuint16mf2x4_t v = *(vuint16mf2x4_t*)base;
+  *(vuint16mf2x4_t*)out = v;
+}
+
+void
+f_vint16mf2x5_t (void *base, void *out)
+{
+  vint16mf2x5_t v = *(vint16mf2x5_t*)base;
+  *(vint16mf2x5_t*)out = v;
+}
+
+void
+f_vuint16mf2x5_t (void *base, void *out)
+{
+  vuint16mf2x5_t v = *(vuint16mf2x5_t*)base;
+  *(vuint16mf2x5_t*)out = v;
+}
+
+void
+f_vint16mf2x6_t (void *base, void *out)
+{
+  vint16mf2x6_t v = *(vint16mf2x6_t*)base;
+  *(vint16mf2x6_t*)out = v;
+}
+
+void
+f_vuint16mf2x6_t (void *base, void *out)
+{
+  vuint16mf2x6_t v = *(vuint16mf2x6_t*)base;
+  *(vuint16mf2x6_t*)out = v;
+}
+
+void
+f_vint16mf2x7_t (void *base, void *out)
+{
+  vint16mf2x7_t v = *(vint16mf2x7_t*)base;
+  *(vint16mf2x7_t*)out = v;
+}
+
+void
+f_vuint16mf2x7_t (void *base, void *out)
+{
+  vuint16mf2x7_t v = *(vuint16mf2x7_t*)base;
+  *(vuint16mf2x7_t*)out = v;
+}
+
+void
+f_vint16mf2x8_t (void *base, void *out)
+{
+  vint16mf2x8_t v = *(vint16mf2x8_t*)base;
+  *(vint16mf2x8_t*)out = v;
+}
+
+void
+f_vuint16mf2x8_t (void *base, void *out)
+{
+  vuint16mf2x8_t v = *(vuint16mf2x8_t*)base;
+  *(vuint16mf2x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-times {vsetvli\s+[a-x0-9]+,\s*zero,\s*e16,\s*mf2,\s*t[au],\s*m[au]} 14 } } */
+/* { dg-final { scan-assembler {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vle16\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vse16\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-9.c b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-9.c
new file mode 100644
index 00000000000..055e2d33646
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/tuple-9.c
@@ -0,0 +1,107 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void
+f_vint16m1x2_t (void *base, void *out)
+{
+  vint16m1x2_t v = *(vint16m1x2_t*)base;
+  *(vint16m1x2_t*)out = v;
+}
+
+void
+f_vuint16m1x2_t (void *base, void *out)
+{
+  vuint16m1x2_t v = *(vuint16m1x2_t*)base;
+  *(vuint16m1x2_t*)out = v;
+}
+
+void
+f_vint16m1x3_t (void *base, void *out)
+{
+  vint16m1x3_t v = *(vint16m1x3_t*)base;
+  *(vint16m1x3_t*)out = v;
+}
+
+void
+f_vuint16m1x3_t (void *base, void *out)
+{
+  vuint16m1x3_t v = *(vuint16m1x3_t*)base;
+  *(vuint16m1x3_t*)out = v;
+}
+
+void
+f_vint16m1x4_t (void *base, void *out)
+{
+  vint16m1x4_t v = *(vint16m1x4_t*)base;
+  *(vint16m1x4_t*)out = v;
+}
+
+void
+f_vuint16m1x4_t (void *base, void *out)
+{
+  vuint16m1x4_t v = *(vuint16m1x4_t*)base;
+  *(vuint16m1x4_t*)out = v;
+}
+
+void
+f_vint16m1x5_t (void *base, void *out)
+{
+  vint16m1x5_t v = *(vint16m1x5_t*)base;
+  *(vint16m1x5_t*)out = v;
+}
+
+void
+f_vuint16m1x5_t (void *base, void *out)
+{
+  vuint16m1x5_t v = *(vuint16m1x5_t*)base;
+  *(vuint16m1x5_t*)out = v;
+}
+
+void
+f_vint16m1x6_t (void *base, void *out)
+{
+  vint16m1x6_t v = *(vint16m1x6_t*)base;
+  *(vint16m1x6_t*)out = v;
+}
+
+void
+f_vuint16m1x6_t (void *base, void *out)
+{
+  vuint16m1x6_t v = *(vuint16m1x6_t*)base;
+  *(vuint16m1x6_t*)out = v;
+}
+
+void
+f_vint16m1x7_t (void *base, void *out)
+{
+  vint16m1x7_t v = *(vint16m1x7_t*)base;
+  *(vint16m1x7_t*)out = v;
+}
+
+void
+f_vuint16m1x7_t (void *base, void *out)
+{
+  vuint16m1x7_t v = *(vuint16m1x7_t*)base;
+  *(vuint16m1x7_t*)out = v;
+}
+
+void
+f_vint16m1x8_t (void *base, void *out)
+{
+  vint16m1x8_t v = *(vint16m1x8_t*)base;
+  *(vint16m1x8_t*)out = v;
+}
+
+void
+f_vuint16m1x8_t (void *base, void *out)
+{
+  vuint16m1x8_t v = *(vuint16m1x8_t*)base;
+  *(vuint16m1x8_t*)out = v;
+}
+
+/* { dg-final { scan-assembler-not {srai} } } */
+/* { dg-final { scan-assembler-not {slli} } } */
+/* { dg-final { scan-assembler-times {vl1re16\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
+/* { dg-final { scan-assembler-times {vs1r\.v\tv[0-9]+,0\([a-x0-9]+\)} 70 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-10.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-10.c
new file mode 100644
index 00000000000..fdc28c77426
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-10.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64f -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;}
+void f_vuint64m1x2_t () {vuint64m1x2_t t;}
+void f_vint64m1x3_t () {vint64m1x3_t t;}
+void f_vuint64m1x3_t () {vuint64m1x3_t t;}
+void f_vint64m1x4_t () {vint64m1x4_t t;}
+void f_vuint64m1x4_t () {vuint64m1x4_t t;}
+void f_vint64m1x5_t () {vint64m1x5_t t;}
+void f_vuint64m1x5_t () {vuint64m1x5_t t;}
+void f_vint64m1x6_t () {vint64m1x6_t t;}
+void f_vuint64m1x6_t () {vuint64m1x6_t t;}
+void f_vint64m1x7_t () {vint64m1x7_t t;}
+void f_vuint64m1x7_t () {vuint64m1x7_t t;}
+void f_vint64m1x8_t () {vint64m1x8_t t;}
+void f_vuint64m1x8_t () {vuint64m1x8_t t;}
+void f_vint64m2x2_t () {vint64m2x2_t t;}
+void f_vuint64m2x2_t () {vuint64m2x2_t t;}
+void f_vint64m2x3_t () {vint64m2x3_t t;}
+void f_vuint64m2x3_t () {vuint64m2x3_t t;}
+void f_vint64m2x4_t () {vint64m2x4_t t;}
+void f_vuint64m2x4_t () {vuint64m2x4_t t;}
+void f_vint64m4x2_t () {vint64m4x2_t t;}
+void f_vuint64m4x2_t () {vuint64m4x2_t t;}
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;}
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;}
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;}
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;}
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;}
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;}
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;}
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;}
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;}
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;}
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;}
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;}
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;}
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;}
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;}
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;}
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;}
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;}
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-11.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-11.c
new file mode 100644
index 00000000000..901d2edcbc5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-11.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64d -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;}
+void f_vuint64m1x2_t () {vuint64m1x2_t t;}
+void f_vint64m1x3_t () {vint64m1x3_t t;}
+void f_vuint64m1x3_t () {vuint64m1x3_t t;}
+void f_vint64m1x4_t () {vint64m1x4_t t;}
+void f_vuint64m1x4_t () {vuint64m1x4_t t;}
+void f_vint64m1x5_t () {vint64m1x5_t t;}
+void f_vuint64m1x5_t () {vuint64m1x5_t t;}
+void f_vint64m1x6_t () {vint64m1x6_t t;}
+void f_vuint64m1x6_t () {vuint64m1x6_t t;}
+void f_vint64m1x7_t () {vint64m1x7_t t;}
+void f_vuint64m1x7_t () {vuint64m1x7_t t;}
+void f_vint64m1x8_t () {vint64m1x8_t t;}
+void f_vuint64m1x8_t () {vuint64m1x8_t t;}
+void f_vint64m2x2_t () {vint64m2x2_t t;}
+void f_vuint64m2x2_t () {vuint64m2x2_t t;}
+void f_vint64m2x3_t () {vint64m2x3_t t;}
+void f_vuint64m2x3_t () {vuint64m2x3_t t;}
+void f_vint64m2x4_t () {vint64m2x4_t t;}
+void f_vuint64m2x4_t () {vuint64m2x4_t t;}
+void f_vint64m4x2_t () {vint64m4x2_t t;}
+void f_vuint64m4x2_t () {vuint64m4x2_t t;}
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;}
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;}
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;}
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;}
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;}
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;}
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;}
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;}
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;}
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;}
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;}
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;}
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;}
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;}
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;}
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;}
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;}
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;}
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;}
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;}
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;}
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;}
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;}
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;}
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;}
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;}
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;}
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;}
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-12.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-12.c
new file mode 100644
index 00000000000..332ff7627b6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-12.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32x -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;} /* { dg-error {unknown type name 'vint8mf8x2_t'} } */
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;} /* { dg-error {unknown type name 'vuint8mf8x2_t'} } */
+void f_vint8mf8x3_t () {vint8mf8x3_t t;} /* { dg-error {unknown type name 'vint8mf8x3_t'} } */
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;} /* { dg-error {unknown type name 'vuint8mf8x3_t'} } */
+void f_vint8mf8x4_t () {vint8mf8x4_t t;} /* { dg-error {unknown type name 'vint8mf8x4_t'} } */
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;} /* { dg-error {unknown type name 'vuint8mf8x4_t'} } */
+void f_vint8mf8x5_t () {vint8mf8x5_t t;} /* { dg-error {unknown type name 'vint8mf8x5_t'} } */
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;} /* { dg-error {unknown type name 'vuint8mf8x5_t'} } */
+void f_vint8mf8x6_t () {vint8mf8x6_t t;} /* { dg-error {unknown type name 'vint8mf8x6_t'} } */
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;} /* { dg-error {unknown type name 'vuint8mf8x6_t'} } */
+void f_vint8mf8x7_t () {vint8mf8x7_t t;} /* { dg-error {unknown type name 'vint8mf8x7_t'} } */
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;} /* { dg-error {unknown type name 'vuint8mf8x7_t'} } */
+void f_vint8mf8x8_t () {vint8mf8x8_t t;} /* { dg-error {unknown type name 'vint8mf8x8_t'} } */
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;} /* { dg-error {unknown type name 'vuint8mf8x8_t'} } */
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;} /* { dg-error {unknown type name 'vint16mf4x2_t'} } */
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;} /* { dg-error {unknown type name 'vuint16mf4x2_t'} } */
+void f_vint16mf4x3_t () {vint16mf4x3_t t;} /* { dg-error {unknown type name 'vint16mf4x3_t'} } */
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;} /* { dg-error {unknown type name 'vuint16mf4x3_t'} } */
+void f_vint16mf4x4_t () {vint16mf4x4_t t;} /* { dg-error {unknown type name 'vint16mf4x4_t'} } */
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;} /* { dg-error {unknown type name 'vuint16mf4x4_t'} } */
+void f_vint16mf4x5_t () {vint16mf4x5_t t;} /* { dg-error {unknown type name 'vint16mf4x5_t'} } */
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;} /* { dg-error {unknown type name 'vuint16mf4x5_t'} } */
+void f_vint16mf4x6_t () {vint16mf4x6_t t;} /* { dg-error {unknown type name 'vint16mf4x6_t'} } */
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;} /* { dg-error {unknown type name 'vuint16mf4x6_t'} } */
+void f_vint16mf4x7_t () {vint16mf4x7_t t;} /* { dg-error {unknown type name 'vint16mf4x7_t'} } */
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;} /* { dg-error {unknown type name 'vuint16mf4x7_t'} } */
+void f_vint16mf4x8_t () {vint16mf4x8_t t;} /* { dg-error {unknown type name 'vint16mf4x8_t'} } */
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;} /* { dg-error {unknown type name 'vuint16mf4x8_t'} } */
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;} /* { dg-error {unknown type name 'vint32mf2x2_t'} } */
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;} /* { dg-error {unknown type name 'vuint32mf2x2_t'} } */
+void f_vint32mf2x3_t () {vint32mf2x3_t t;} /* { dg-error {unknown type name 'vint32mf2x3_t'} } */
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;} /* { dg-error {unknown type name 'vuint32mf2x3_t'} } */
+void f_vint32mf2x4_t () {vint32mf2x4_t t;} /* { dg-error {unknown type name 'vint32mf2x4_t'} } */
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;} /* { dg-error {unknown type name 'vuint32mf2x4_t'} } */
+void f_vint32mf2x5_t () {vint32mf2x5_t t;} /* { dg-error {unknown type name 'vint32mf2x5_t'} } */
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;} /* { dg-error {unknown type name 'vuint32mf2x5_t'} } */
+void f_vint32mf2x6_t () {vint32mf2x6_t t;} /* { dg-error {unknown type name 'vint32mf2x6_t'} } */
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;} /* { dg-error {unknown type name 'vuint32mf2x6_t'} } */
+void f_vint32mf2x7_t () {vint32mf2x7_t t;} /* { dg-error {unknown type name 'vint32mf2x7_t'} } */
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;} /* { dg-error {unknown type name 'vuint32mf2x7_t'} } */
+void f_vint32mf2x8_t () {vint32mf2x8_t t;} /* { dg-error {unknown type name 'vint32mf2x8_t'} } */
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;} /* { dg-error {unknown type name 'vuint32mf2x8_t'} } */
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;} /* { dg-error {unknown type name 'vint64m1x2_t'} } */
+void f_vuint64m1x2_t () {vuint64m1x2_t t;} /* { dg-error {unknown type name 'vuint64m1x2_t'} } */
+void f_vint64m1x3_t () {vint64m1x3_t t;} /* { dg-error {unknown type name 'vint64m1x3_t'} } */
+void f_vuint64m1x3_t () {vuint64m1x3_t t;} /* { dg-error {unknown type name 'vuint64m1x3_t'} } */
+void f_vint64m1x4_t () {vint64m1x4_t t;} /* { dg-error {unknown type name 'vint64m1x4_t'} } */
+void f_vuint64m1x4_t () {vuint64m1x4_t t;} /* { dg-error {unknown type name 'vuint64m1x4_t'} } */
+void f_vint64m1x5_t () {vint64m1x5_t t;} /* { dg-error {unknown type name 'vint64m1x5_t'} } */
+void f_vuint64m1x5_t () {vuint64m1x5_t t;} /* { dg-error {unknown type name 'vuint64m1x5_t'} } */
+void f_vint64m1x6_t () {vint64m1x6_t t;} /* { dg-error {unknown type name 'vint64m1x6_t'} } */
+void f_vuint64m1x6_t () {vuint64m1x6_t t;} /* { dg-error {unknown type name 'vuint64m1x6_t'} } */
+void f_vint64m1x7_t () {vint64m1x7_t t;} /* { dg-error {unknown type name 'vint64m1x7_t'} } */
+void f_vuint64m1x7_t () {vuint64m1x7_t t;} /* { dg-error {unknown type name 'vuint64m1x7_t'} } */
+void f_vint64m1x8_t () {vint64m1x8_t t;} /* { dg-error {unknown type name 'vint64m1x8_t'} } */
+void f_vuint64m1x8_t () {vuint64m1x8_t t;} /* { dg-error {unknown type name 'vuint64m1x8_t'} } */
+void f_vint64m2x2_t () {vint64m2x2_t t;} /* { dg-error {unknown type name 'vint64m2x2_t'} } */
+void f_vuint64m2x2_t () {vuint64m2x2_t t;} /* { dg-error {unknown type name 'vuint64m2x2_t'} } */
+void f_vint64m2x3_t () {vint64m2x3_t t;} /* { dg-error {unknown type name 'vint64m2x3_t'} } */
+void f_vuint64m2x3_t () {vuint64m2x3_t t;} /* { dg-error {unknown type name 'vuint64m2x3_t'} } */
+void f_vint64m2x4_t () {vint64m2x4_t t;} /* { dg-error {unknown type name 'vint64m2x4_t'} } */
+void f_vuint64m2x4_t () {vuint64m2x4_t t;} /* { dg-error {unknown type name 'vuint64m2x4_t'} } */
+void f_vint64m4x2_t () {vint64m4x2_t t;} /* { dg-error {unknown type name 'vint64m4x2_t'} } */
+void f_vuint64m4x2_t () {vuint64m4x2_t t;} /* { dg-error {unknown type name 'vuint64m4x2_t'} } */
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;} /* { dg-error {unknown type name 'vfloat32mf2x2_t'} } */
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;} /* { dg-error {unknown type name 'vfloat32mf2x3_t'} } */
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;} /* { dg-error {unknown type name 'vfloat32mf2x4_t'} } */
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;} /* { dg-error {unknown type name 'vfloat32mf2x5_t'} } */
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;} /* { dg-error {unknown type name 'vfloat32mf2x6_t'} } */
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;} /* { dg-error {unknown type name 'vfloat32mf2x7_t'} } */
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;} /* { dg-error {unknown type name 'vfloat32mf2x8_t'} } */
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;} /* { dg-error {unknown type name 'vfloat32m1x2_t'} } */
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;} /* { dg-error {unknown type name 'vfloat32m1x3_t'} } */
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;} /* { dg-error {unknown type name 'vfloat32m1x4_t'} } */
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;} /* { dg-error {unknown type name 'vfloat32m1x5_t'} } */
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;} /* { dg-error {unknown type name 'vfloat32m1x6_t'} } */
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;} /* { dg-error {unknown type name 'vfloat32m1x7_t'} } */
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;} /* { dg-error {unknown type name 'vfloat32m1x8_t'} } */
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;} /* { dg-error {unknown type name 'vfloat32m2x2_t'} } */
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;} /* { dg-error {unknown type name 'vfloat32m2x3_t'} } */
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;} /* { dg-error {unknown type name 'vfloat32m2x4_t'} } */
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;} /* { dg-error {unknown type name 'vfloat32m4x2_t'} } */
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-13.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-13.c
new file mode 100644
index 00000000000..ed180749cb6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-13.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32x_zvl64b -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;} /* { dg-error {unknown type name 'vint64m1x2_t'} } */
+void f_vuint64m1x2_t () {vuint64m1x2_t t;} /* { dg-error {unknown type name 'vuint64m1x2_t'} } */
+void f_vint64m1x3_t () {vint64m1x3_t t;} /* { dg-error {unknown type name 'vint64m1x3_t'} } */
+void f_vuint64m1x3_t () {vuint64m1x3_t t;} /* { dg-error {unknown type name 'vuint64m1x3_t'} } */
+void f_vint64m1x4_t () {vint64m1x4_t t;} /* { dg-error {unknown type name 'vint64m1x4_t'} } */
+void f_vuint64m1x4_t () {vuint64m1x4_t t;} /* { dg-error {unknown type name 'vuint64m1x4_t'} } */
+void f_vint64m1x5_t () {vint64m1x5_t t;} /* { dg-error {unknown type name 'vint64m1x5_t'} } */
+void f_vuint64m1x5_t () {vuint64m1x5_t t;} /* { dg-error {unknown type name 'vuint64m1x5_t'} } */
+void f_vint64m1x6_t () {vint64m1x6_t t;} /* { dg-error {unknown type name 'vint64m1x6_t'} } */
+void f_vuint64m1x6_t () {vuint64m1x6_t t;} /* { dg-error {unknown type name 'vuint64m1x6_t'} } */
+void f_vint64m1x7_t () {vint64m1x7_t t;} /* { dg-error {unknown type name 'vint64m1x7_t'} } */
+void f_vuint64m1x7_t () {vuint64m1x7_t t;} /* { dg-error {unknown type name 'vuint64m1x7_t'} } */
+void f_vint64m1x8_t () {vint64m1x8_t t;} /* { dg-error {unknown type name 'vint64m1x8_t'} } */
+void f_vuint64m1x8_t () {vuint64m1x8_t t;} /* { dg-error {unknown type name 'vuint64m1x8_t'} } */
+void f_vint64m2x2_t () {vint64m2x2_t t;} /* { dg-error {unknown type name 'vint64m2x2_t'} } */
+void f_vuint64m2x2_t () {vuint64m2x2_t t;} /* { dg-error {unknown type name 'vuint64m2x2_t'} } */
+void f_vint64m2x3_t () {vint64m2x3_t t;} /* { dg-error {unknown type name 'vint64m2x3_t'} } */
+void f_vuint64m2x3_t () {vuint64m2x3_t t;} /* { dg-error {unknown type name 'vuint64m2x3_t'} } */
+void f_vint64m2x4_t () {vint64m2x4_t t;} /* { dg-error {unknown type name 'vint64m2x4_t'} } */
+void f_vuint64m2x4_t () {vuint64m2x4_t t;} /* { dg-error {unknown type name 'vuint64m2x4_t'} } */
+void f_vint64m4x2_t () {vint64m4x2_t t;} /* { dg-error {unknown type name 'vint64m4x2_t'} } */
+void f_vuint64m4x2_t () {vuint64m4x2_t t;} /* { dg-error {unknown type name 'vuint64m4x2_t'} } */
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;} /* { dg-error {unknown type name 'vfloat32mf2x2_t'} } */
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;} /* { dg-error {unknown type name 'vfloat32mf2x3_t'} } */
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;} /* { dg-error {unknown type name 'vfloat32mf2x4_t'} } */
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;} /* { dg-error {unknown type name 'vfloat32mf2x5_t'} } */
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;} /* { dg-error {unknown type name 'vfloat32mf2x6_t'} } */
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;} /* { dg-error {unknown type name 'vfloat32mf2x7_t'} } */
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;} /* { dg-error {unknown type name 'vfloat32mf2x8_t'} } */
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;} /* { dg-error {unknown type name 'vfloat32m1x2_t'} } */
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;} /* { dg-error {unknown type name 'vfloat32m1x3_t'} } */
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;} /* { dg-error {unknown type name 'vfloat32m1x4_t'} } */
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;} /* { dg-error {unknown type name 'vfloat32m1x5_t'} } */
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;} /* { dg-error {unknown type name 'vfloat32m1x6_t'} } */
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;} /* { dg-error {unknown type name 'vfloat32m1x7_t'} } */
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;} /* { dg-error {unknown type name 'vfloat32m1x8_t'} } */
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;} /* { dg-error {unknown type name 'vfloat32m2x2_t'} } */
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;} /* { dg-error {unknown type name 'vfloat32m2x3_t'} } */
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;} /* { dg-error {unknown type name 'vfloat32m2x4_t'} } */
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;} /* { dg-error {unknown type name 'vfloat32m4x2_t'} } */
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-14.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-14.c
new file mode 100644
index 00000000000..70e0989b6e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-14.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32f -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;} /* { dg-error {unknown type name 'vint8mf8x2_t'} } */
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;} /* { dg-error {unknown type name 'vuint8mf8x2_t'} } */
+void f_vint8mf8x3_t () {vint8mf8x3_t t;} /* { dg-error {unknown type name 'vint8mf8x3_t'} } */
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;} /* { dg-error {unknown type name 'vuint8mf8x3_t'} } */
+void f_vint8mf8x4_t () {vint8mf8x4_t t;} /* { dg-error {unknown type name 'vint8mf8x4_t'} } */
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;} /* { dg-error {unknown type name 'vuint8mf8x4_t'} } */
+void f_vint8mf8x5_t () {vint8mf8x5_t t;} /* { dg-error {unknown type name 'vint8mf8x5_t'} } */
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;} /* { dg-error {unknown type name 'vuint8mf8x5_t'} } */
+void f_vint8mf8x6_t () {vint8mf8x6_t t;} /* { dg-error {unknown type name 'vint8mf8x6_t'} } */
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;} /* { dg-error {unknown type name 'vuint8mf8x6_t'} } */
+void f_vint8mf8x7_t () {vint8mf8x7_t t;} /* { dg-error {unknown type name 'vint8mf8x7_t'} } */
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;} /* { dg-error {unknown type name 'vuint8mf8x7_t'} } */
+void f_vint8mf8x8_t () {vint8mf8x8_t t;} /* { dg-error {unknown type name 'vint8mf8x8_t'} } */
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;} /* { dg-error {unknown type name 'vuint8mf8x8_t'} } */
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;} /* { dg-error {unknown type name 'vint16mf4x2_t'} } */
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;} /* { dg-error {unknown type name 'vuint16mf4x2_t'} } */
+void f_vint16mf4x3_t () {vint16mf4x3_t t;} /* { dg-error {unknown type name 'vint16mf4x3_t'} } */
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;} /* { dg-error {unknown type name 'vuint16mf4x3_t'} } */
+void f_vint16mf4x4_t () {vint16mf4x4_t t;} /* { dg-error {unknown type name 'vint16mf4x4_t'} } */
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;} /* { dg-error {unknown type name 'vuint16mf4x4_t'} } */
+void f_vint16mf4x5_t () {vint16mf4x5_t t;} /* { dg-error {unknown type name 'vint16mf4x5_t'} } */
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;} /* { dg-error {unknown type name 'vuint16mf4x5_t'} } */
+void f_vint16mf4x6_t () {vint16mf4x6_t t;} /* { dg-error {unknown type name 'vint16mf4x6_t'} } */
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;} /* { dg-error {unknown type name 'vuint16mf4x6_t'} } */
+void f_vint16mf4x7_t () {vint16mf4x7_t t;} /* { dg-error {unknown type name 'vint16mf4x7_t'} } */
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;} /* { dg-error {unknown type name 'vuint16mf4x7_t'} } */
+void f_vint16mf4x8_t () {vint16mf4x8_t t;} /* { dg-error {unknown type name 'vint16mf4x8_t'} } */
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;} /* { dg-error {unknown type name 'vuint16mf4x8_t'} } */
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;} /* { dg-error {unknown type name 'vint32mf2x2_t'} } */
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;} /* { dg-error {unknown type name 'vuint32mf2x2_t'} } */
+void f_vint32mf2x3_t () {vint32mf2x3_t t;} /* { dg-error {unknown type name 'vint32mf2x3_t'} } */
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;} /* { dg-error {unknown type name 'vuint32mf2x3_t'} } */
+void f_vint32mf2x4_t () {vint32mf2x4_t t;} /* { dg-error {unknown type name 'vint32mf2x4_t'} } */
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;} /* { dg-error {unknown type name 'vuint32mf2x4_t'} } */
+void f_vint32mf2x5_t () {vint32mf2x5_t t;} /* { dg-error {unknown type name 'vint32mf2x5_t'} } */
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;} /* { dg-error {unknown type name 'vuint32mf2x5_t'} } */
+void f_vint32mf2x6_t () {vint32mf2x6_t t;} /* { dg-error {unknown type name 'vint32mf2x6_t'} } */
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;} /* { dg-error {unknown type name 'vuint32mf2x6_t'} } */
+void f_vint32mf2x7_t () {vint32mf2x7_t t;} /* { dg-error {unknown type name 'vint32mf2x7_t'} } */
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;} /* { dg-error {unknown type name 'vuint32mf2x7_t'} } */
+void f_vint32mf2x8_t () {vint32mf2x8_t t;} /* { dg-error {unknown type name 'vint32mf2x8_t'} } */
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;} /* { dg-error {unknown type name 'vuint32mf2x8_t'} } */
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;} /* { dg-error {unknown type name 'vint64m1x2_t'} } */
+void f_vuint64m1x2_t () {vuint64m1x2_t t;} /* { dg-error {unknown type name 'vuint64m1x2_t'} } */
+void f_vint64m1x3_t () {vint64m1x3_t t;} /* { dg-error {unknown type name 'vint64m1x3_t'} } */
+void f_vuint64m1x3_t () {vuint64m1x3_t t;} /* { dg-error {unknown type name 'vuint64m1x3_t'} } */
+void f_vint64m1x4_t () {vint64m1x4_t t;} /* { dg-error {unknown type name 'vint64m1x4_t'} } */
+void f_vuint64m1x4_t () {vuint64m1x4_t t;} /* { dg-error {unknown type name 'vuint64m1x4_t'} } */
+void f_vint64m1x5_t () {vint64m1x5_t t;} /* { dg-error {unknown type name 'vint64m1x5_t'} } */
+void f_vuint64m1x5_t () {vuint64m1x5_t t;} /* { dg-error {unknown type name 'vuint64m1x5_t'} } */
+void f_vint64m1x6_t () {vint64m1x6_t t;} /* { dg-error {unknown type name 'vint64m1x6_t'} } */
+void f_vuint64m1x6_t () {vuint64m1x6_t t;} /* { dg-error {unknown type name 'vuint64m1x6_t'} } */
+void f_vint64m1x7_t () {vint64m1x7_t t;} /* { dg-error {unknown type name 'vint64m1x7_t'} } */
+void f_vuint64m1x7_t () {vuint64m1x7_t t;} /* { dg-error {unknown type name 'vuint64m1x7_t'} } */
+void f_vint64m1x8_t () {vint64m1x8_t t;} /* { dg-error {unknown type name 'vint64m1x8_t'} } */
+void f_vuint64m1x8_t () {vuint64m1x8_t t;} /* { dg-error {unknown type name 'vuint64m1x8_t'} } */
+void f_vint64m2x2_t () {vint64m2x2_t t;} /* { dg-error {unknown type name 'vint64m2x2_t'} } */
+void f_vuint64m2x2_t () {vuint64m2x2_t t;} /* { dg-error {unknown type name 'vuint64m2x2_t'} } */
+void f_vint64m2x3_t () {vint64m2x3_t t;} /* { dg-error {unknown type name 'vint64m2x3_t'} } */
+void f_vuint64m2x3_t () {vuint64m2x3_t t;} /* { dg-error {unknown type name 'vuint64m2x3_t'} } */
+void f_vint64m2x4_t () {vint64m2x4_t t;} /* { dg-error {unknown type name 'vint64m2x4_t'} } */
+void f_vuint64m2x4_t () {vuint64m2x4_t t;} /* { dg-error {unknown type name 'vuint64m2x4_t'} } */
+void f_vint64m4x2_t () {vint64m4x2_t t;} /* { dg-error {unknown type name 'vint64m4x2_t'} } */
+void f_vuint64m4x2_t () {vuint64m4x2_t t;} /* { dg-error {unknown type name 'vuint64m4x2_t'} } */
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;} /* { dg-error {unknown type name 'vfloat32mf2x2_t'} } */
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;} /* { dg-error {unknown type name 'vfloat32mf2x3_t'} } */
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;} /* { dg-error {unknown type name 'vfloat32mf2x4_t'} } */
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;} /* { dg-error {unknown type name 'vfloat32mf2x5_t'} } */
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;} /* { dg-error {unknown type name 'vfloat32mf2x6_t'} } */
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;} /* { dg-error {unknown type name 'vfloat32mf2x7_t'} } */
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;} /* { dg-error {unknown type name 'vfloat32mf2x8_t'} } */
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;}
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;}
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;}
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;}
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;}
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;}
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;}
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;}
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;}
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;}
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;}
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-15.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-15.c
new file mode 100644
index 00000000000..2a615f80816
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-15.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve32f_zvl64b -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;} /* { dg-error {unknown type name 'vint64m1x2_t'} } */
+void f_vuint64m1x2_t () {vuint64m1x2_t t;} /* { dg-error {unknown type name 'vuint64m1x2_t'} } */
+void f_vint64m1x3_t () {vint64m1x3_t t;} /* { dg-error {unknown type name 'vint64m1x3_t'} } */
+void f_vuint64m1x3_t () {vuint64m1x3_t t;} /* { dg-error {unknown type name 'vuint64m1x3_t'} } */
+void f_vint64m1x4_t () {vint64m1x4_t t;} /* { dg-error {unknown type name 'vint64m1x4_t'} } */
+void f_vuint64m1x4_t () {vuint64m1x4_t t;} /* { dg-error {unknown type name 'vuint64m1x4_t'} } */
+void f_vint64m1x5_t () {vint64m1x5_t t;} /* { dg-error {unknown type name 'vint64m1x5_t'} } */
+void f_vuint64m1x5_t () {vuint64m1x5_t t;} /* { dg-error {unknown type name 'vuint64m1x5_t'} } */
+void f_vint64m1x6_t () {vint64m1x6_t t;} /* { dg-error {unknown type name 'vint64m1x6_t'} } */
+void f_vuint64m1x6_t () {vuint64m1x6_t t;} /* { dg-error {unknown type name 'vuint64m1x6_t'} } */
+void f_vint64m1x7_t () {vint64m1x7_t t;} /* { dg-error {unknown type name 'vint64m1x7_t'} } */
+void f_vuint64m1x7_t () {vuint64m1x7_t t;} /* { dg-error {unknown type name 'vuint64m1x7_t'} } */
+void f_vint64m1x8_t () {vint64m1x8_t t;} /* { dg-error {unknown type name 'vint64m1x8_t'} } */
+void f_vuint64m1x8_t () {vuint64m1x8_t t;} /* { dg-error {unknown type name 'vuint64m1x8_t'} } */
+void f_vint64m2x2_t () {vint64m2x2_t t;} /* { dg-error {unknown type name 'vint64m2x2_t'} } */
+void f_vuint64m2x2_t () {vuint64m2x2_t t;} /* { dg-error {unknown type name 'vuint64m2x2_t'} } */
+void f_vint64m2x3_t () {vint64m2x3_t t;} /* { dg-error {unknown type name 'vint64m2x3_t'} } */
+void f_vuint64m2x3_t () {vuint64m2x3_t t;} /* { dg-error {unknown type name 'vuint64m2x3_t'} } */
+void f_vint64m2x4_t () {vint64m2x4_t t;} /* { dg-error {unknown type name 'vint64m2x4_t'} } */
+void f_vuint64m2x4_t () {vuint64m2x4_t t;} /* { dg-error {unknown type name 'vuint64m2x4_t'} } */
+void f_vint64m4x2_t () {vint64m4x2_t t;} /* { dg-error {unknown type name 'vint64m4x2_t'} } */
+void f_vuint64m4x2_t () {vuint64m4x2_t t;} /* { dg-error {unknown type name 'vuint64m4x2_t'} } */
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;}
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;}
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;}
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;}
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;}
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;}
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;}
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;}
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;}
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;}
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;}
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;}
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;}
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;}
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;}
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;}
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;}
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;}
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-7.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-7.c
new file mode 100644
index 00000000000..2172a5c7c79
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-7.c
@@ -0,0 +1,204 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;} /* { dg-error {unknown type name 'vint8mf8x2_t'} } */
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;} /* { dg-error {unknown type name 'vuint8mf8x2_t'} } */
+void f_vint8mf8x3_t () {vint8mf8x3_t t;} /* { dg-error {unknown type name 'vint8mf8x3_t'} } */
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;} /* { dg-error {unknown type name 'vuint8mf8x3_t'} } */
+void f_vint8mf8x4_t () {vint8mf8x4_t t;} /* { dg-error {unknown type name 'vint8mf8x4_t'} } */
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;} /* { dg-error {unknown type name 'vuint8mf8x4_t'} } */
+void f_vint8mf8x5_t () {vint8mf8x5_t t;} /* { dg-error {unknown type name 'vint8mf8x5_t'} } */
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;} /* { dg-error {unknown type name 'vuint8mf8x5_t'} } */
+void f_vint8mf8x6_t () {vint8mf8x6_t t;} /* { dg-error {unknown type name 'vint8mf8x6_t'} } */
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;} /* { dg-error {unknown type name 'vuint8mf8x6_t'} } */
+void f_vint8mf8x7_t () {vint8mf8x7_t t;} /* { dg-error {unknown type name 'vint8mf8x7_t'} } */
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;} /* { dg-error {unknown type name 'vuint8mf8x7_t'} } */
+void f_vint8mf8x8_t () {vint8mf8x8_t t;} /* { dg-error {unknown type name 'vint8mf8x8_t'} } */
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;} /* { dg-error {unknown type name 'vuint8mf8x8_t'} } */
+void f_vint8mf4x2_t () {vint8mf4x2_t t;} /* { dg-error {unknown type name 'vint8mf4x2_t'} } */
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;} /* { dg-error {unknown type name 'vuint8mf4x2_t'} } */
+void f_vint8mf4x3_t () {vint8mf4x3_t t;} /* { dg-error {unknown type name 'vint8mf4x3_t'} } */
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;} /* { dg-error {unknown type name 'vuint8mf4x3_t'} } */
+void f_vint8mf4x4_t () {vint8mf4x4_t t;} /* { dg-error {unknown type name 'vint8mf4x4_t'} } */
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;} /* { dg-error {unknown type name 'vuint8mf4x4_t'} } */
+void f_vint8mf4x5_t () {vint8mf4x5_t t;} /* { dg-error {unknown type name 'vint8mf4x5_t'} } */
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;} /* { dg-error {unknown type name 'vuint8mf4x5_t'} } */
+void f_vint8mf4x6_t () {vint8mf4x6_t t;} /* { dg-error {unknown type name 'vint8mf4x6_t'} } */
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;} /* { dg-error {unknown type name 'vuint8mf4x6_t'} } */
+void f_vint8mf4x7_t () {vint8mf4x7_t t;} /* { dg-error {unknown type name 'vint8mf4x7_t'} } */
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;} /* { dg-error {unknown type name 'vuint8mf4x7_t'} } */
+void f_vint8mf4x8_t () {vint8mf4x8_t t;} /* { dg-error {unknown type name 'vint8mf4x8_t'} } */
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;} /* { dg-error {unknown type name 'vuint8mf4x8_t'} } */
+void f_vint8mf2x2_t () {vint8mf2x2_t t;} /* { dg-error {unknown type name 'vint8mf2x2_t'} } */
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;} /* { dg-error {unknown type name 'vuint8mf2x2_t'} } */
+void f_vint8mf2x3_t () {vint8mf2x3_t t;} /* { dg-error {unknown type name 'vint8mf2x3_t'} } */
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;} /* { dg-error {unknown type name 'vuint8mf2x3_t'} } */
+void f_vint8mf2x4_t () {vint8mf2x4_t t;} /* { dg-error {unknown type name 'vint8mf2x4_t'} } */
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;} /* { dg-error {unknown type name 'vuint8mf2x4_t'} } */
+void f_vint8mf2x5_t () {vint8mf2x5_t t;} /* { dg-error {unknown type name 'vint8mf2x5_t'} } */
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;} /* { dg-error {unknown type name 'vuint8mf2x5_t'} } */
+void f_vint8mf2x6_t () {vint8mf2x6_t t;} /* { dg-error {unknown type name 'vint8mf2x6_t'} } */
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;} /* { dg-error {unknown type name 'vuint8mf2x6_t'} } */
+void f_vint8mf2x7_t () {vint8mf2x7_t t;} /* { dg-error {unknown type name 'vint8mf2x7_t'} } */
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;} /* { dg-error {unknown type name 'vuint8mf2x7_t'} } */
+void f_vint8mf2x8_t () {vint8mf2x8_t t;} /* { dg-error {unknown type name 'vint8mf2x8_t'} } */
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;} /* { dg-error {unknown type name 'vuint8mf2x8_t'} } */
+void f_vint8m1x2_t () {vint8m1x2_t t;} /* { dg-error {unknown type name 'vint8m1x2_t'} } */
+void f_vuint8m1x2_t () {vuint8m1x2_t t;} /* { dg-error {unknown type name 'vuint8m1x2_t'} } */
+void f_vint8m1x3_t () {vint8m1x3_t t;} /* { dg-error {unknown type name 'vint8m1x3_t'} } */
+void f_vuint8m1x3_t () {vuint8m1x3_t t;} /* { dg-error {unknown type name 'vuint8m1x3_t'} } */
+void f_vint8m1x4_t () {vint8m1x4_t t;} /* { dg-error {unknown type name 'vint8m1x4_t'} } */
+void f_vuint8m1x4_t () {vuint8m1x4_t t;} /* { dg-error {unknown type name 'vuint8m1x4_t'} } */
+void f_vint8m1x5_t () {vint8m1x5_t t;} /* { dg-error {unknown type name 'vint8m1x5_t'} } */
+void f_vuint8m1x5_t () {vuint8m1x5_t t;} /* { dg-error {unknown type name 'vuint8m1x5_t'} } */
+void f_vint8m1x6_t () {vint8m1x6_t t;} /* { dg-error {unknown type name 'vint8m1x6_t'} } */
+void f_vuint8m1x6_t () {vuint8m1x6_t t;} /* { dg-error {unknown type name 'vuint8m1x6_t'} } */
+void f_vint8m1x7_t () {vint8m1x7_t t;} /* { dg-error {unknown type name 'vint8m1x7_t'} } */
+void f_vuint8m1x7_t () {vuint8m1x7_t t;} /* { dg-error {unknown type name 'vuint8m1x7_t'} } */
+void f_vint8m1x8_t () {vint8m1x8_t t;} /* { dg-error {unknown type name 'vint8m1x8_t'} } */
+void f_vuint8m1x8_t () {vuint8m1x8_t t;} /* { dg-error {unknown type name 'vuint8m1x8_t'} } */
+void f_vint8m2x2_t () {vint8m2x2_t t;} /* { dg-error {unknown type name 'vint8m2x2_t'} } */
+void f_vuint8m2x2_t () {vuint8m2x2_t t;} /* { dg-error {unknown type name 'vuint8m2x2_t'} } */
+void f_vint8m2x3_t () {vint8m2x3_t t;} /* { dg-error {unknown type name 'vint8m2x3_t'} } */
+void f_vuint8m2x3_t () {vuint8m2x3_t t;} /* { dg-error {unknown type name 'vuint8m2x3_t'} } */
+void f_vint8m2x4_t () {vint8m2x4_t t;} /* { dg-error {unknown type name 'vint8m2x4_t'} } */
+void f_vuint8m2x4_t () {vuint8m2x4_t t;} /* { dg-error {unknown type name 'vuint8m2x4_t'} } */
+void f_vint8m4x2_t () {vint8m4x2_t t;} /* { dg-error {unknown type name 'vint8m4x2_t'} } */
+void f_vuint8m4x2_t () {vuint8m4x2_t t;} /* { dg-error {unknown type name 'vuint8m4x2_t'} } */
+void f_vint16mf4x2_t () {vint16mf4x2_t t;} /* { dg-error {unknown type name 'vint16mf4x2_t'} } */
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;} /* { dg-error {unknown type name 'vuint16mf4x2_t'} } */
+void f_vint16mf4x3_t () {vint16mf4x3_t t;} /* { dg-error {unknown type name 'vint16mf4x3_t'} } */
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;} /* { dg-error {unknown type name 'vuint16mf4x3_t'} } */
+void f_vint16mf4x4_t () {vint16mf4x4_t t;} /* { dg-error {unknown type name 'vint16mf4x4_t'} } */
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;} /* { dg-error {unknown type name 'vuint16mf4x4_t'} } */
+void f_vint16mf4x5_t () {vint16mf4x5_t t;} /* { dg-error {unknown type name 'vint16mf4x5_t'} } */
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;} /* { dg-error {unknown type name 'vuint16mf4x5_t'} } */
+void f_vint16mf4x6_t () {vint16mf4x6_t t;} /* { dg-error {unknown type name 'vint16mf4x6_t'} } */
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;} /* { dg-error {unknown type name 'vuint16mf4x6_t'} } */
+void f_vint16mf4x7_t () {vint16mf4x7_t t;} /* { dg-error {unknown type name 'vint16mf4x7_t'} } */
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;} /* { dg-error {unknown type name 'vuint16mf4x7_t'} } */
+void f_vint16mf4x8_t () {vint16mf4x8_t t;} /* { dg-error {unknown type name 'vint16mf4x8_t'} } */
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;} /* { dg-error {unknown type name 'vuint16mf4x8_t'} } */
+void f_vint16mf2x2_t () {vint16mf2x2_t t;} /* { dg-error {unknown type name 'vint16mf2x2_t'} } */
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;} /* { dg-error {unknown type name 'vuint16mf2x2_t'} } */
+void f_vint16mf2x3_t () {vint16mf2x3_t t;} /* { dg-error {unknown type name 'vint16mf2x3_t'} } */
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;} /* { dg-error {unknown type name 'vuint16mf2x3_t'} } */
+void f_vint16mf2x4_t () {vint16mf2x4_t t;} /* { dg-error {unknown type name 'vint16mf2x4_t'} } */
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;} /* { dg-error {unknown type name 'vuint16mf2x4_t'} } */
+void f_vint16mf2x5_t () {vint16mf2x5_t t;} /* { dg-error {unknown type name 'vint16mf2x5_t'} } */
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;} /* { dg-error {unknown type name 'vuint16mf2x5_t'} } */
+void f_vint16mf2x6_t () {vint16mf2x6_t t;} /* { dg-error {unknown type name 'vint16mf2x6_t'} } */
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;} /* { dg-error {unknown type name 'vuint16mf2x6_t'} } */
+void f_vint16mf2x7_t () {vint16mf2x7_t t;} /* { dg-error {unknown type name 'vint16mf2x7_t'} } */
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;} /* { dg-error {unknown type name 'vuint16mf2x7_t'} } */
+void f_vint16mf2x8_t () {vint16mf2x8_t t;} /* { dg-error {unknown type name 'vint16mf2x8_t'} } */
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;} /* { dg-error {unknown type name 'vuint16mf2x8_t'} } */
+void f_vint16m1x2_t () {vint16m1x2_t t;} /* { dg-error {unknown type name 'vint16m1x2_t'} } */
+void f_vuint16m1x2_t () {vuint16m1x2_t t;} /* { dg-error {unknown type name 'vuint16m1x2_t'} } */
+void f_vint16m1x3_t () {vint16m1x3_t t;} /* { dg-error {unknown type name 'vint16m1x3_t'} } */
+void f_vuint16m1x3_t () {vuint16m1x3_t t;} /* { dg-error {unknown type name 'vuint16m1x3_t'} } */
+void f_vint16m1x4_t () {vint16m1x4_t t;} /* { dg-error {unknown type name 'vint16m1x4_t'} } */
+void f_vuint16m1x4_t () {vuint16m1x4_t t;} /* { dg-error {unknown type name 'vuint16m1x4_t'} } */
+void f_vint16m1x5_t () {vint16m1x5_t t;} /* { dg-error {unknown type name 'vint16m1x5_t'} } */
+void f_vuint16m1x5_t () {vuint16m1x5_t t;} /* { dg-error {unknown type name 'vuint16m1x5_t'} } */
+void f_vint16m1x6_t () {vint16m1x6_t t;} /* { dg-error {unknown type name 'vint16m1x6_t'} } */
+void f_vuint16m1x6_t () {vuint16m1x6_t t;} /* { dg-error {unknown type name 'vuint16m1x6_t'} } */
+void f_vint16m1x7_t () {vint16m1x7_t t;} /* { dg-error {unknown type name 'vint16m1x7_t'} } */
+void f_vuint16m1x7_t () {vuint16m1x7_t t;} /* { dg-error {unknown type name 'vuint16m1x7_t'} } */
+void f_vint16m1x8_t () {vint16m1x8_t t;} /* { dg-error {unknown type name 'vint16m1x8_t'} } */
+void f_vuint16m1x8_t () {vuint16m1x8_t t;} /* { dg-error {unknown type name 'vuint16m1x8_t'} } */
+void f_vint16m2x2_t () {vint16m2x2_t t;} /* { dg-error {unknown type name 'vint16m2x2_t'} } */
+void f_vuint16m2x2_t () {vuint16m2x2_t t;} /* { dg-error {unknown type name 'vuint16m2x2_t'} } */
+void f_vint16m2x3_t () {vint16m2x3_t t;} /* { dg-error {unknown type name 'vint16m2x3_t'} } */
+void f_vuint16m2x3_t () {vuint16m2x3_t t;} /* { dg-error {unknown type name 'vuint16m2x3_t'} } */
+void f_vint16m2x4_t () {vint16m2x4_t t;} /* { dg-error {unknown type name 'vint16m2x4_t'} } */
+void f_vuint16m2x4_t () {vuint16m2x4_t t;} /* { dg-error {unknown type name 'vuint16m2x4_t'} } */
+void f_vint16m4x2_t () {vint16m4x2_t t;} /* { dg-error {unknown type name 'vint16m4x2_t'} } */
+void f_vuint16m4x2_t () {vuint16m4x2_t t;} /* { dg-error {unknown type name 'vuint16m4x2_t'} } */
+void f_vint32mf2x2_t () {vint32mf2x2_t t;} /* { dg-error {unknown type name 'vint32mf2x2_t'} } */
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;} /* { dg-error {unknown type name 'vuint32mf2x2_t'} } */
+void f_vint32mf2x3_t () {vint32mf2x3_t t;} /* { dg-error {unknown type name 'vint32mf2x3_t'} } */
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;} /* { dg-error {unknown type name 'vuint32mf2x3_t'} } */
+void f_vint32mf2x4_t () {vint32mf2x4_t t;} /* { dg-error {unknown type name 'vint32mf2x4_t'} } */
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;} /* { dg-error {unknown type name 'vuint32mf2x4_t'} } */
+void f_vint32mf2x5_t () {vint32mf2x5_t t;} /* { dg-error {unknown type name 'vint32mf2x5_t'} } */
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;} /* { dg-error {unknown type name 'vuint32mf2x5_t'} } */
+void f_vint32mf2x6_t () {vint32mf2x6_t t;} /* { dg-error {unknown type name 'vint32mf2x6_t'} } */
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;} /* { dg-error {unknown type name 'vuint32mf2x6_t'} } */
+void f_vint32mf2x7_t () {vint32mf2x7_t t;} /* { dg-error {unknown type name 'vint32mf2x7_t'} } */
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;} /* { dg-error {unknown type name 'vuint32mf2x7_t'} } */
+void f_vint32mf2x8_t () {vint32mf2x8_t t;} /* { dg-error {unknown type name 'vint32mf2x8_t'} } */
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;} /* { dg-error {unknown type name 'vuint32mf2x8_t'} } */
+void f_vint32m1x2_t () {vint32m1x2_t t;} /* { dg-error {unknown type name 'vint32m1x2_t'} } */
+void f_vuint32m1x2_t () {vuint32m1x2_t t;} /* { dg-error {unknown type name 'vuint32m1x2_t'} } */
+void f_vint32m1x3_t () {vint32m1x3_t t;} /* { dg-error {unknown type name 'vint32m1x3_t'} } */
+void f_vuint32m1x3_t () {vuint32m1x3_t t;} /* { dg-error {unknown type name 'vuint32m1x3_t'} } */
+void f_vint32m1x4_t () {vint32m1x4_t t;} /* { dg-error {unknown type name 'vint32m1x4_t'} } */
+void f_vuint32m1x4_t () {vuint32m1x4_t t;} /* { dg-error {unknown type name 'vuint32m1x4_t'} } */
+void f_vint32m1x5_t () {vint32m1x5_t t;} /* { dg-error {unknown type name 'vint32m1x5_t'} } */
+void f_vuint32m1x5_t () {vuint32m1x5_t t;} /* { dg-error {unknown type name 'vuint32m1x5_t'} } */
+void f_vint32m1x6_t () {vint32m1x6_t t;} /* { dg-error {unknown type name 'vint32m1x6_t'} } */
+void f_vuint32m1x6_t () {vuint32m1x6_t t;} /* { dg-error {unknown type name 'vuint32m1x6_t'} } */
+void f_vint32m1x7_t () {vint32m1x7_t t;} /* { dg-error {unknown type name 'vint32m1x7_t'} } */
+void f_vuint32m1x7_t () {vuint32m1x7_t t;} /* { dg-error {unknown type name 'vuint32m1x7_t'} } */
+void f_vint32m1x8_t () {vint32m1x8_t t;} /* { dg-error {unknown type name 'vint32m1x8_t'} } */
+void f_vuint32m1x8_t () {vuint32m1x8_t t;} /* { dg-error {unknown type name 'vuint32m1x8_t'} } */
+void f_vint32m2x2_t () {vint32m2x2_t t;} /* { dg-error {unknown type name 'vint32m2x2_t'} } */
+void f_vuint32m2x2_t () {vuint32m2x2_t t;} /* { dg-error {unknown type name 'vuint32m2x2_t'} } */
+void f_vint32m2x3_t () {vint32m2x3_t t;} /* { dg-error {unknown type name 'vint32m2x3_t'} } */
+void f_vuint32m2x3_t () {vuint32m2x3_t t;} /* { dg-error {unknown type name 'vuint32m2x3_t'} } */
+void f_vint32m2x4_t () {vint32m2x4_t t;} /* { dg-error {unknown type name 'vint32m2x4_t'} } */
+void f_vuint32m2x4_t () {vuint32m2x4_t t;} /* { dg-error {unknown type name 'vuint32m2x4_t'} } */
+void f_vint32m4x2_t () {vint32m4x2_t t;} /* { dg-error {unknown type name 'vint32m4x2_t'} } */
+void f_vuint32m4x2_t () {vuint32m4x2_t t;} /* { dg-error {unknown type name 'vuint32m4x2_t'} } */
+void f_vint64m1x2_t () {vint64m1x2_t t;} /* { dg-error {unknown type name 'vint64m1x2_t'} } */
+void f_vuint64m1x2_t () {vuint64m1x2_t t;} /* { dg-error {unknown type name 'vuint64m1x2_t'} } */
+void f_vint64m1x3_t () {vint64m1x3_t t;} /* { dg-error {unknown type name 'vint64m1x3_t'} } */
+void f_vuint64m1x3_t () {vuint64m1x3_t t;} /* { dg-error {unknown type name 'vuint64m1x3_t'} } */
+void f_vint64m1x4_t () {vint64m1x4_t t;} /* { dg-error {unknown type name 'vint64m1x4_t'} } */
+void f_vuint64m1x4_t () {vuint64m1x4_t t;} /* { dg-error {unknown type name 'vuint64m1x4_t'} } */
+void f_vint64m1x5_t () {vint64m1x5_t t;} /* { dg-error {unknown type name 'vint64m1x5_t'} } */
+void f_vuint64m1x5_t () {vuint64m1x5_t t;} /* { dg-error {unknown type name 'vuint64m1x5_t'} } */
+void f_vint64m1x6_t () {vint64m1x6_t t;} /* { dg-error {unknown type name 'vint64m1x6_t'} } */
+void f_vuint64m1x6_t () {vuint64m1x6_t t;} /* { dg-error {unknown type name 'vuint64m1x6_t'} } */
+void f_vint64m1x7_t () {vint64m1x7_t t;} /* { dg-error {unknown type name 'vint64m1x7_t'} } */
+void f_vuint64m1x7_t () {vuint64m1x7_t t;} /* { dg-error {unknown type name 'vuint64m1x7_t'} } */
+void f_vint64m1x8_t () {vint64m1x8_t t;} /* { dg-error {unknown type name 'vint64m1x8_t'} } */
+void f_vuint64m1x8_t () {vuint64m1x8_t t;} /* { dg-error {unknown type name 'vuint64m1x8_t'} } */
+void f_vint64m2x2_t () {vint64m2x2_t t;} /* { dg-error {unknown type name 'vint64m2x2_t'} } */
+void f_vuint64m2x2_t () {vuint64m2x2_t t;} /* { dg-error {unknown type name 'vuint64m2x2_t'} } */
+void f_vint64m2x3_t () {vint64m2x3_t t;} /* { dg-error {unknown type name 'vint64m2x3_t'} } */
+void f_vuint64m2x3_t () {vuint64m2x3_t t;} /* { dg-error {unknown type name 'vuint64m2x3_t'} } */
+void f_vint64m2x4_t () {vint64m2x4_t t;} /* { dg-error {unknown type name 'vint64m2x4_t'} } */
+void f_vuint64m2x4_t () {vuint64m2x4_t t;} /* { dg-error {unknown type name 'vuint64m2x4_t'} } */
+void f_vint64m4x2_t () {vint64m4x2_t t;} /* { dg-error {unknown type name 'vint64m4x2_t'} } */
+void f_vuint64m4x2_t () {vuint64m4x2_t t;} /* { dg-error {unknown type name 'vuint64m4x2_t'} } */
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;} /* { dg-error {unknown type name 'vfloat32mf2x2_t'} } */
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;} /* { dg-error {unknown type name 'vfloat32mf2x3_t'} } */
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;} /* { dg-error {unknown type name 'vfloat32mf2x4_t'} } */
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;} /* { dg-error {unknown type name 'vfloat32mf2x5_t'} } */
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;} /* { dg-error {unknown type name 'vfloat32mf2x6_t'} } */
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;} /* { dg-error {unknown type name 'vfloat32mf2x7_t'} } */
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;} /* { dg-error {unknown type name 'vfloat32mf2x8_t'} } */
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;} /* { dg-error {unknown type name 'vfloat32m1x2_t'} } */
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;} /* { dg-error {unknown type name 'vfloat32m1x3_t'} } */
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;} /* { dg-error {unknown type name 'vfloat32m1x4_t'} } */
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;} /* { dg-error {unknown type name 'vfloat32m1x5_t'} } */
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;} /* { dg-error {unknown type name 'vfloat32m1x6_t'} } */
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;} /* { dg-error {unknown type name 'vfloat32m1x7_t'} } */
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;} /* { dg-error {unknown type name 'vfloat32m1x8_t'} } */
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;} /* { dg-error {unknown type name 'vfloat32m2x2_t'} } */
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;} /* { dg-error {unknown type name 'vfloat32m2x3_t'} } */
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;} /* { dg-error {unknown type name 'vfloat32m2x4_t'} } */
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;} /* { dg-error {unknown type name 'vfloat32m4x2_t'} } */
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-8.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-8.c
new file mode 100644
index 00000000000..b666c4a1080
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-8.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gcv -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;}
+void f_vuint64m1x2_t () {vuint64m1x2_t t;}
+void f_vint64m1x3_t () {vint64m1x3_t t;}
+void f_vuint64m1x3_t () {vuint64m1x3_t t;}
+void f_vint64m1x4_t () {vint64m1x4_t t;}
+void f_vuint64m1x4_t () {vuint64m1x4_t t;}
+void f_vint64m1x5_t () {vint64m1x5_t t;}
+void f_vuint64m1x5_t () {vuint64m1x5_t t;}
+void f_vint64m1x6_t () {vint64m1x6_t t;}
+void f_vuint64m1x6_t () {vuint64m1x6_t t;}
+void f_vint64m1x7_t () {vint64m1x7_t t;}
+void f_vuint64m1x7_t () {vuint64m1x7_t t;}
+void f_vint64m1x8_t () {vint64m1x8_t t;}
+void f_vuint64m1x8_t () {vuint64m1x8_t t;}
+void f_vint64m2x2_t () {vint64m2x2_t t;}
+void f_vuint64m2x2_t () {vuint64m2x2_t t;}
+void f_vint64m2x3_t () {vint64m2x3_t t;}
+void f_vuint64m2x3_t () {vuint64m2x3_t t;}
+void f_vint64m2x4_t () {vint64m2x4_t t;}
+void f_vuint64m2x4_t () {vuint64m2x4_t t;}
+void f_vint64m4x2_t () {vint64m4x2_t t;}
+void f_vuint64m4x2_t () {vuint64m4x2_t t;}
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;}
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;}
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;}
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;}
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;}
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;}
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;}
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;}
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;}
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;}
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;}
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;}
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;}
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;}
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;}
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;}
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;}
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;}
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;}
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;}
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;}
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;}
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;}
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;}
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;}
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;}
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;}
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;}
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/user-9.c b/gcc/testsuite/gcc.target/riscv/rvv/base/user-9.c
new file mode 100644
index 00000000000..98a7d391d4e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/user-9.c
@@ -0,0 +1,206 @@
+/* { dg-do compile } */
+/* { dg-options "-O3 -march=rv32gc_zve64x -mabi=ilp32d" } */
+
+#include "riscv_vector.h"
+
+void f_vint8mf8x2_t () {vint8mf8x2_t t;}
+void f_vuint8mf8x2_t () {vuint8mf8x2_t t;}
+void f_vint8mf8x3_t () {vint8mf8x3_t t;}
+void f_vuint8mf8x3_t () {vuint8mf8x3_t t;}
+void f_vint8mf8x4_t () {vint8mf8x4_t t;}
+void f_vuint8mf8x4_t () {vuint8mf8x4_t t;}
+void f_vint8mf8x5_t () {vint8mf8x5_t t;}
+void f_vuint8mf8x5_t () {vuint8mf8x5_t t;}
+void f_vint8mf8x6_t () {vint8mf8x6_t t;}
+void f_vuint8mf8x6_t () {vuint8mf8x6_t t;}
+void f_vint8mf8x7_t () {vint8mf8x7_t t;}
+void f_vuint8mf8x7_t () {vuint8mf8x7_t t;}
+void f_vint8mf8x8_t () {vint8mf8x8_t t;}
+void f_vuint8mf8x8_t () {vuint8mf8x8_t t;}
+void f_vint8mf4x2_t () {vint8mf4x2_t t;}
+void f_vuint8mf4x2_t () {vuint8mf4x2_t t;}
+void f_vint8mf4x3_t () {vint8mf4x3_t t;}
+void f_vuint8mf4x3_t () {vuint8mf4x3_t t;}
+void f_vint8mf4x4_t () {vint8mf4x4_t t;}
+void f_vuint8mf4x4_t () {vuint8mf4x4_t t;}
+void f_vint8mf4x5_t () {vint8mf4x5_t t;}
+void f_vuint8mf4x5_t () {vuint8mf4x5_t t;}
+void f_vint8mf4x6_t () {vint8mf4x6_t t;}
+void f_vuint8mf4x6_t () {vuint8mf4x6_t t;}
+void f_vint8mf4x7_t () {vint8mf4x7_t t;}
+void f_vuint8mf4x7_t () {vuint8mf4x7_t t;}
+void f_vint8mf4x8_t () {vint8mf4x8_t t;}
+void f_vuint8mf4x8_t () {vuint8mf4x8_t t;}
+void f_vint8mf2x2_t () {vint8mf2x2_t t;}
+void f_vuint8mf2x2_t () {vuint8mf2x2_t t;}
+void f_vint8mf2x3_t () {vint8mf2x3_t t;}
+void f_vuint8mf2x3_t () {vuint8mf2x3_t t;}
+void f_vint8mf2x4_t () {vint8mf2x4_t t;}
+void f_vuint8mf2x4_t () {vuint8mf2x4_t t;}
+void f_vint8mf2x5_t () {vint8mf2x5_t t;}
+void f_vuint8mf2x5_t () {vuint8mf2x5_t t;}
+void f_vint8mf2x6_t () {vint8mf2x6_t t;}
+void f_vuint8mf2x6_t () {vuint8mf2x6_t t;}
+void f_vint8mf2x7_t () {vint8mf2x7_t t;}
+void f_vuint8mf2x7_t () {vuint8mf2x7_t t;}
+void f_vint8mf2x8_t () {vint8mf2x8_t t;}
+void f_vuint8mf2x8_t () {vuint8mf2x8_t t;}
+void f_vint8m1x2_t () {vint8m1x2_t t;}
+void f_vuint8m1x2_t () {vuint8m1x2_t t;}
+void f_vint8m1x3_t () {vint8m1x3_t t;}
+void f_vuint8m1x3_t () {vuint8m1x3_t t;}
+void f_vint8m1x4_t () {vint8m1x4_t t;}
+void f_vuint8m1x4_t () {vuint8m1x4_t t;}
+void f_vint8m1x5_t () {vint8m1x5_t t;}
+void f_vuint8m1x5_t () {vuint8m1x5_t t;}
+void f_vint8m1x6_t () {vint8m1x6_t t;}
+void f_vuint8m1x6_t () {vuint8m1x6_t t;}
+void f_vint8m1x7_t () {vint8m1x7_t t;}
+void f_vuint8m1x7_t () {vuint8m1x7_t t;}
+void f_vint8m1x8_t () {vint8m1x8_t t;}
+void f_vuint8m1x8_t () {vuint8m1x8_t t;}
+void f_vint8m2x2_t () {vint8m2x2_t t;}
+void f_vuint8m2x2_t () {vuint8m2x2_t t;}
+void f_vint8m2x3_t () {vint8m2x3_t t;}
+void f_vuint8m2x3_t () {vuint8m2x3_t t;}
+void f_vint8m2x4_t () {vint8m2x4_t t;}
+void f_vuint8m2x4_t () {vuint8m2x4_t t;}
+void f_vint8m4x2_t () {vint8m4x2_t t;}
+void f_vuint8m4x2_t () {vuint8m4x2_t t;}
+void f_vint16mf4x2_t () {vint16mf4x2_t t;}
+void f_vuint16mf4x2_t () {vuint16mf4x2_t t;}
+void f_vint16mf4x3_t () {vint16mf4x3_t t;}
+void f_vuint16mf4x3_t () {vuint16mf4x3_t t;}
+void f_vint16mf4x4_t () {vint16mf4x4_t t;}
+void f_vuint16mf4x4_t () {vuint16mf4x4_t t;}
+void f_vint16mf4x5_t () {vint16mf4x5_t t;}
+void f_vuint16mf4x5_t () {vuint16mf4x5_t t;}
+void f_vint16mf4x6_t () {vint16mf4x6_t t;}
+void f_vuint16mf4x6_t () {vuint16mf4x6_t t;}
+void f_vint16mf4x7_t () {vint16mf4x7_t t;}
+void f_vuint16mf4x7_t () {vuint16mf4x7_t t;}
+void f_vint16mf4x8_t () {vint16mf4x8_t t;}
+void f_vuint16mf4x8_t () {vuint16mf4x8_t t;}
+void f_vint16mf2x2_t () {vint16mf2x2_t t;}
+void f_vuint16mf2x2_t () {vuint16mf2x2_t t;}
+void f_vint16mf2x3_t () {vint16mf2x3_t t;}
+void f_vuint16mf2x3_t () {vuint16mf2x3_t t;}
+void f_vint16mf2x4_t () {vint16mf2x4_t t;}
+void f_vuint16mf2x4_t () {vuint16mf2x4_t t;}
+void f_vint16mf2x5_t () {vint16mf2x5_t t;}
+void f_vuint16mf2x5_t () {vuint16mf2x5_t t;}
+void f_vint16mf2x6_t () {vint16mf2x6_t t;}
+void f_vuint16mf2x6_t () {vuint16mf2x6_t t;}
+void f_vint16mf2x7_t () {vint16mf2x7_t t;}
+void f_vuint16mf2x7_t () {vuint16mf2x7_t t;}
+void f_vint16mf2x8_t () {vint16mf2x8_t t;}
+void f_vuint16mf2x8_t () {vuint16mf2x8_t t;}
+void f_vint16m1x2_t () {vint16m1x2_t t;}
+void f_vuint16m1x2_t () {vuint16m1x2_t t;}
+void f_vint16m1x3_t () {vint16m1x3_t t;}
+void f_vuint16m1x3_t () {vuint16m1x3_t t;}
+void f_vint16m1x4_t () {vint16m1x4_t t;}
+void f_vuint16m1x4_t () {vuint16m1x4_t t;}
+void f_vint16m1x5_t () {vint16m1x5_t t;}
+void f_vuint16m1x5_t () {vuint16m1x5_t t;}
+void f_vint16m1x6_t () {vint16m1x6_t t;}
+void f_vuint16m1x6_t () {vuint16m1x6_t t;}
+void f_vint16m1x7_t () {vint16m1x7_t t;}
+void f_vuint16m1x7_t () {vuint16m1x7_t t;}
+void f_vint16m1x8_t () {vint16m1x8_t t;}
+void f_vuint16m1x8_t () {vuint16m1x8_t t;}
+void f_vint16m2x2_t () {vint16m2x2_t t;}
+void f_vuint16m2x2_t () {vuint16m2x2_t t;}
+void f_vint16m2x3_t () {vint16m2x3_t t;}
+void f_vuint16m2x3_t () {vuint16m2x3_t t;}
+void f_vint16m2x4_t () {vint16m2x4_t t;}
+void f_vuint16m2x4_t () {vuint16m2x4_t t;}
+void f_vint16m4x2_t () {vint16m4x2_t t;}
+void f_vuint16m4x2_t () {vuint16m4x2_t t;}
+void f_vint32mf2x2_t () {vint32mf2x2_t t;}
+void f_vuint32mf2x2_t () {vuint32mf2x2_t t;}
+void f_vint32mf2x3_t () {vint32mf2x3_t t;}
+void f_vuint32mf2x3_t () {vuint32mf2x3_t t;}
+void f_vint32mf2x4_t () {vint32mf2x4_t t;}
+void f_vuint32mf2x4_t () {vuint32mf2x4_t t;}
+void f_vint32mf2x5_t () {vint32mf2x5_t t;}
+void f_vuint32mf2x5_t () {vuint32mf2x5_t t;}
+void f_vint32mf2x6_t () {vint32mf2x6_t t;}
+void f_vuint32mf2x6_t () {vuint32mf2x6_t t;}
+void f_vint32mf2x7_t () {vint32mf2x7_t t;}
+void f_vuint32mf2x7_t () {vuint32mf2x7_t t;}
+void f_vint32mf2x8_t () {vint32mf2x8_t t;}
+void f_vuint32mf2x8_t () {vuint32mf2x8_t t;}
+void f_vint32m1x2_t () {vint32m1x2_t t;}
+void f_vuint32m1x2_t () {vuint32m1x2_t t;}
+void f_vint32m1x3_t () {vint32m1x3_t t;}
+void f_vuint32m1x3_t () {vuint32m1x3_t t;}
+void f_vint32m1x4_t () {vint32m1x4_t t;}
+void f_vuint32m1x4_t () {vuint32m1x4_t t;}
+void f_vint32m1x5_t () {vint32m1x5_t t;}
+void f_vuint32m1x5_t () {vuint32m1x5_t t;}
+void f_vint32m1x6_t () {vint32m1x6_t t;}
+void f_vuint32m1x6_t () {vuint32m1x6_t t;}
+void f_vint32m1x7_t () {vint32m1x7_t t;}
+void f_vuint32m1x7_t () {vuint32m1x7_t t;}
+void f_vint32m1x8_t () {vint32m1x8_t t;}
+void f_vuint32m1x8_t () {vuint32m1x8_t t;}
+void f_vint32m2x2_t () {vint32m2x2_t t;}
+void f_vuint32m2x2_t () {vuint32m2x2_t t;}
+void f_vint32m2x3_t () {vint32m2x3_t t;}
+void f_vuint32m2x3_t () {vuint32m2x3_t t;}
+void f_vint32m2x4_t () {vint32m2x4_t t;}
+void f_vuint32m2x4_t () {vuint32m2x4_t t;}
+void f_vint32m4x2_t () {vint32m4x2_t t;}
+void f_vuint32m4x2_t () {vuint32m4x2_t t;}
+void f_vint64m1x2_t () {vint64m1x2_t t;}
+void f_vuint64m1x2_t () {vuint64m1x2_t t;}
+void f_vint64m1x3_t () {vint64m1x3_t t;}
+void f_vuint64m1x3_t () {vuint64m1x3_t t;}
+void f_vint64m1x4_t () {vint64m1x4_t t;}
+void f_vuint64m1x4_t () {vuint64m1x4_t t;}
+void f_vint64m1x5_t () {vint64m1x5_t t;}
+void f_vuint64m1x5_t () {vuint64m1x5_t t;}
+void f_vint64m1x6_t () {vint64m1x6_t t;}
+void f_vuint64m1x6_t () {vuint64m1x6_t t;}
+void f_vint64m1x7_t () {vint64m1x7_t t;}
+void f_vuint64m1x7_t () {vuint64m1x7_t t;}
+void f_vint64m1x8_t () {vint64m1x8_t t;}
+void f_vuint64m1x8_t () {vuint64m1x8_t t;}
+void f_vint64m2x2_t () {vint64m2x2_t t;}
+void f_vuint64m2x2_t () {vuint64m2x2_t t;}
+void f_vint64m2x3_t () {vint64m2x3_t t;}
+void f_vuint64m2x3_t () {vuint64m2x3_t t;}
+void f_vint64m2x4_t () {vint64m2x4_t t;}
+void f_vuint64m2x4_t () {vuint64m2x4_t t;}
+void f_vint64m4x2_t () {vint64m4x2_t t;}
+void f_vuint64m4x2_t () {vuint64m4x2_t t;}
+void f_vfloat32mf2x2_t () {vfloat32mf2x2_t t;} /* { dg-error {unknown type name 'vfloat32mf2x2_t'} } */
+void f_vfloat32mf2x3_t () {vfloat32mf2x3_t t;} /* { dg-error {unknown type name 'vfloat32mf2x3_t'} } */
+void f_vfloat32mf2x4_t () {vfloat32mf2x4_t t;} /* { dg-error {unknown type name 'vfloat32mf2x4_t'} } */
+void f_vfloat32mf2x5_t () {vfloat32mf2x5_t t;} /* { dg-error {unknown type name 'vfloat32mf2x5_t'} } */
+void f_vfloat32mf2x6_t () {vfloat32mf2x6_t t;} /* { dg-error {unknown type name 'vfloat32mf2x6_t'} } */
+void f_vfloat32mf2x7_t () {vfloat32mf2x7_t t;} /* { dg-error {unknown type name 'vfloat32mf2x7_t'} } */
+void f_vfloat32mf2x8_t () {vfloat32mf2x8_t t;} /* { dg-error {unknown type name 'vfloat32mf2x8_t'} } */
+void f_vfloat32m1x2_t () {vfloat32m1x2_t t;} /* { dg-error {unknown type name 'vfloat32m1x2_t'} } */
+void f_vfloat32m1x3_t () {vfloat32m1x3_t t;} /* { dg-error {unknown type name 'vfloat32m1x3_t'} } */
+void f_vfloat32m1x4_t () {vfloat32m1x4_t t;} /* { dg-error {unknown type name 'vfloat32m1x4_t'} } */
+void f_vfloat32m1x5_t () {vfloat32m1x5_t t;} /* { dg-error {unknown type name 'vfloat32m1x5_t'} } */
+void f_vfloat32m1x6_t () {vfloat32m1x6_t t;} /* { dg-error {unknown type name 'vfloat32m1x6_t'} } */
+void f_vfloat32m1x7_t () {vfloat32m1x7_t t;} /* { dg-error {unknown type name 'vfloat32m1x7_t'} } */
+void f_vfloat32m1x8_t () {vfloat32m1x8_t t;} /* { dg-error {unknown type name 'vfloat32m1x8_t'} } */
+void f_vfloat32m2x2_t () {vfloat32m2x2_t t;} /* { dg-error {unknown type name 'vfloat32m2x2_t'} } */
+void f_vfloat32m2x3_t () {vfloat32m2x3_t t;} /* { dg-error {unknown type name 'vfloat32m2x3_t'} } */
+void f_vfloat32m2x4_t () {vfloat32m2x4_t t;} /* { dg-error {unknown type name 'vfloat32m2x4_t'} } */
+void f_vfloat32m4x2_t () {vfloat32m4x2_t t;} /* { dg-error {unknown type name 'vfloat32m4x2_t'} } */
+void f_vfloat64m1x2_t () {vfloat64m1x2_t t;} /* { dg-error {unknown type name 'vfloat64m1x2_t'} } */
+void f_vfloat64m1x3_t () {vfloat64m1x3_t t;} /* { dg-error {unknown type name 'vfloat64m1x3_t'} } */
+void f_vfloat64m1x4_t () {vfloat64m1x4_t t;} /* { dg-error {unknown type name 'vfloat64m1x4_t'} } */
+void f_vfloat64m1x5_t () {vfloat64m1x5_t t;} /* { dg-error {unknown type name 'vfloat64m1x5_t'} } */
+void f_vfloat64m1x6_t () {vfloat64m1x6_t t;} /* { dg-error {unknown type name 'vfloat64m1x6_t'} } */
+void f_vfloat64m1x7_t () {vfloat64m1x7_t t;} /* { dg-error {unknown type name 'vfloat64m1x7_t'} } */
+void f_vfloat64m1x8_t () {vfloat64m1x8_t t;} /* { dg-error {unknown type name 'vfloat64m1x8_t'} } */
+void f_vfloat64m2x2_t () {vfloat64m2x2_t t;} /* { dg-error {unknown type name 'vfloat64m2x2_t'} } */
+void f_vfloat64m2x3_t () {vfloat64m2x3_t t;} /* { dg-error {unknown type name 'vfloat64m2x3_t'} } */
+void f_vfloat64m2x4_t () {vfloat64m2x4_t t;} /* { dg-error {unknown type name 'vfloat64m2x4_t'} } */
+void f_vfloat64m4x2_t () {vfloat64m4x2_t t;} /* { dg-error {unknown type name 'vfloat64m4x2_t'} } */
-- 
2.36.1


^ permalink raw reply	[flat|nested] 3+ messages in thread
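
The four tests above pin down exactly when each tuple type is visible:
never without riscv_vector.h (user-7.c), and otherwise precisely when
the element type's ISA requirements are met. zve32f lacks 64-bit
elements and 64-bit FP (user-15.c), a full 'v' extension provides every
type (user-8.c), and zve64x has 64-bit integers but no vector FP
(user-9.c). A minimal sketch of the same gating, assuming only the
header and the user-15.c options (-O3 -march=rv32gc_zve32f_zvl64b
-mabi=ilp32d); this is an illustration, not code from the patch:

#include "riscv_vector.h"

void ok (void)
{
  vint32m1x4_t i;    /* ELEN=32 integer tuples exist under zve32f.  */
  vfloat32m1x2_t f;  /* 32-bit FP tuples exist as well ('f' in zve32f).  */
  (void) i;
  (void) f;
}

/* Declaring vint64m1x2_t or vfloat64m1x2_t here would instead be
   rejected with "unknown type name": zve32f provides neither ELEN=64
   nor 64-bit vector FP.  */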

* Re: [PATCH] RISC-V: Add tuple types support
  2023-04-18 12:09 [PATCH] RISC-V: Add tuple types support juzhe.zhong
@ 2023-05-03 10:40 ` Kito Cheng
  0 siblings, 0 replies; 3+ messages in thread
From: Kito Cheng @ 2023-05-03 10:40 UTC (permalink / raw)
  To: juzhe.zhong; +Cc: gcc-patches, palmer

Thanks, committed to trunk!

^ permalink raw reply	[flat|nested] 3+ messages in thread

* [PATCH] RISC-V: Add tuple types support
@ 2023-04-18 12:04 juzhe.zhong
  0 siblings, 0 replies; 3+ messages in thread
From: juzhe.zhong @ 2023-04-18 12:04 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, Juzhe-Zhong

From: Juzhe-Zhong <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-modes.def (RVV_TUPLE_MODES): New macro.
        (RVV_TUPLE_PARTIAL_MODES): Ditto.
        * config/riscv/riscv-protos.h (riscv_v_ext_tuple_mode_p): New function.
        (get_nf): Ditto.
        (get_subpart_mode): Ditto.
        (get_tuple_mode): Ditto.
        (expand_tuple_move): Ditto.
        * config/riscv/riscv-v.cc (ENTRY): New macro.
        (TUPLE_ENTRY): Ditto.
        (get_nf): New function.
        (get_subpart_mode): Ditto.
        (get_tuple_mode): Ditto.
        (expand_tuple_move): Ditto.
        * config/riscv/riscv-vector-builtins.cc (DEF_RVV_TUPLE_TYPE): New macro.
        (register_tuple_type): New function.
        * config/riscv/riscv-vector-builtins.def (DEF_RVV_TUPLE_TYPE): New macro.
        (vint8mf8x2_t): New macro.
        (vuint8mf8x2_t): Ditto.
        (vint8mf8x3_t): Ditto.
        (vuint8mf8x3_t): Ditto.
        (vint8mf8x4_t): Ditto.
        (vuint8mf8x4_t): Ditto.
        (vint8mf8x5_t): Ditto.
        (vuint8mf8x5_t): Ditto.
        (vint8mf8x6_t): Ditto.
        (vuint8mf8x6_t): Ditto.
        (vint8mf8x7_t): Ditto.
        (vuint8mf8x7_t): Ditto.
        (vint8mf8x8_t): Ditto.
        (vuint8mf8x8_t): Ditto.
        (vint8mf4x2_t): Ditto.
        (vuint8mf4x2_t): Ditto.
        (vint8mf4x3_t): Ditto.
        (vuint8mf4x3_t): Ditto.
        (vint8mf4x4_t): Ditto.
        (vuint8mf4x4_t): Ditto.
        (vint8mf4x5_t): Ditto.
        (vuint8mf4x5_t): Ditto.
        (vint8mf4x6_t): Ditto.
        (vuint8mf4x6_t): Ditto.
        (vint8mf4x7_t): Ditto.
        (vuint8mf4x7_t): Ditto.
        (vint8mf4x8_t): Ditto.
        (vuint8mf4x8_t): Ditto.
        (vint8mf2x2_t): Ditto.
        (vuint8mf2x2_t): Ditto.
        (vint8mf2x3_t): Ditto.
        (vuint8mf2x3_t): Ditto.
        (vint8mf2x4_t): Ditto.
        (vuint8mf2x4_t): Ditto.
        (vint8mf2x5_t): Ditto.
        (vuint8mf2x5_t): Ditto.
        (vint8mf2x6_t): Ditto.
        (vuint8mf2x6_t): Ditto.
        (vint8mf2x7_t): Ditto.
        (vuint8mf2x7_t): Ditto.
        (vint8mf2x8_t): Ditto.
        (vuint8mf2x8_t): Ditto.
        (vint8m1x2_t): Ditto.
        (vuint8m1x2_t): Ditto.
        (vint8m1x3_t): Ditto.
        (vuint8m1x3_t): Ditto.
        (vint8m1x4_t): Ditto.
        (vuint8m1x4_t): Ditto.
        (vint8m1x5_t): Ditto.
        (vuint8m1x5_t): Ditto.
        (vint8m1x6_t): Ditto.
        (vuint8m1x6_t): Ditto.
        (vint8m1x7_t): Ditto.
        (vuint8m1x7_t): Ditto.
        (vint8m1x8_t): Ditto.
        (vuint8m1x8_t): Ditto.
        (vint8m2x2_t): Ditto.
        (vuint8m2x2_t): Ditto.
        (vint8m2x3_t): Ditto.
        (vuint8m2x3_t): Ditto.
        (vint8m2x4_t): Ditto.
        (vuint8m2x4_t): Ditto.
        (vint8m4x2_t): Ditto.
        (vuint8m4x2_t): Ditto.
        (vint16mf4x2_t): Ditto.
        (vuint16mf4x2_t): Ditto.
        (vint16mf4x3_t): Ditto.
        (vuint16mf4x3_t): Ditto.
        (vint16mf4x4_t): Ditto.
        (vuint16mf4x4_t): Ditto.
        (vint16mf4x5_t): Ditto.
        (vuint16mf4x5_t): Ditto.
        (vint16mf4x6_t): Ditto.
        (vuint16mf4x6_t): Ditto.
        (vint16mf4x7_t): Ditto.
        (vuint16mf4x7_t): Ditto.
        (vint16mf4x8_t): Ditto.
        (vuint16mf4x8_t): Ditto.
        (vint16mf2x2_t): Ditto.
        (vuint16mf2x2_t): Ditto.
        (vint16mf2x3_t): Ditto.
        (vuint16mf2x3_t): Ditto.
        (vint16mf2x4_t): Ditto.
        (vuint16mf2x4_t): Ditto.
        (vint16mf2x5_t): Ditto.
        (vuint16mf2x5_t): Ditto.
        (vint16mf2x6_t): Ditto.
        (vuint16mf2x6_t): Ditto.
        (vint16mf2x7_t): Ditto.
        (vuint16mf2x7_t): Ditto.
        (vint16mf2x8_t): Ditto.
        (vuint16mf2x8_t): Ditto.
        (vint16m1x2_t): Ditto.
        (vuint16m1x2_t): Ditto.
        (vint16m1x3_t): Ditto.
        (vuint16m1x3_t): Ditto.
        (vint16m1x4_t): Ditto.
        (vuint16m1x4_t): Ditto.
        (vint16m1x5_t): Ditto.
        (vuint16m1x5_t): Ditto.
        (vint16m1x6_t): Ditto.
        (vuint16m1x6_t): Ditto.
        (vint16m1x7_t): Ditto.
        (vuint16m1x7_t): Ditto.
        (vint16m1x8_t): Ditto.
        (vuint16m1x8_t): Ditto.
        (vint16m2x2_t): Ditto.
        (vuint16m2x2_t): Ditto.
        (vint16m2x3_t): Ditto.
        (vuint16m2x3_t): Ditto.
        (vint16m2x4_t): Ditto.
        (vuint16m2x4_t): Ditto.
        (vint16m4x2_t): Ditto.
        (vuint16m4x2_t): Ditto.
        (vint32mf2x2_t): Ditto.
        (vuint32mf2x2_t): Ditto.
        (vint32mf2x3_t): Ditto.
        (vuint32mf2x3_t): Ditto.
        (vint32mf2x4_t): Ditto.
        (vuint32mf2x4_t): Ditto.
        (vint32mf2x5_t): Ditto.
        (vuint32mf2x5_t): Ditto.
        (vint32mf2x6_t): Ditto.
        (vuint32mf2x6_t): Ditto.
        (vint32mf2x7_t): Ditto.
        (vuint32mf2x7_t): Ditto.
        (vint32mf2x8_t): Ditto.
        (vuint32mf2x8_t): Ditto.
        (vint32m1x2_t): Ditto.
        (vuint32m1x2_t): Ditto.
        (vint32m1x3_t): Ditto.
        (vuint32m1x3_t): Ditto.
        (vint32m1x4_t): Ditto.
        (vuint32m1x4_t): Ditto.
        (vint32m1x5_t): Ditto.
        (vuint32m1x5_t): Ditto.
        (vint32m1x6_t): Ditto.
        (vuint32m1x6_t): Ditto.
        (vint32m1x7_t): Ditto.
        (vuint32m1x7_t): Ditto.
        (vint32m1x8_t): Ditto.
        (vuint32m1x8_t): Ditto.
        (vint32m2x2_t): Ditto.
        (vuint32m2x2_t): Ditto.
        (vint32m2x3_t): Ditto.
        (vuint32m2x3_t): Ditto.
        (vint32m2x4_t): Ditto.
        (vuint32m2x4_t): Ditto.
        (vint32m4x2_t): Ditto.
        (vuint32m4x2_t): Ditto.
        (vint64m1x2_t): Ditto.
        (vuint64m1x2_t): Ditto.
        (vint64m1x3_t): Ditto.
        (vuint64m1x3_t): Ditto.
        (vint64m1x4_t): Ditto.
        (vuint64m1x4_t): Ditto.
        (vint64m1x5_t): Ditto.
        (vuint64m1x5_t): Ditto.
        (vint64m1x6_t): Ditto.
        (vuint64m1x6_t): Ditto.
        (vint64m1x7_t): Ditto.
        (vuint64m1x7_t): Ditto.
        (vint64m1x8_t): Ditto.
        (vuint64m1x8_t): Ditto.
        (vint64m2x2_t): Ditto.
        (vuint64m2x2_t): Ditto.
        (vint64m2x3_t): Ditto.
        (vuint64m2x3_t): Ditto.
        (vint64m2x4_t): Ditto.
        (vuint64m2x4_t): Ditto.
        (vint64m4x2_t): Ditto.
        (vuint64m4x2_t): Ditto.
        (vfloat32mf2x2_t): Ditto.
        (vfloat32mf2x3_t): Ditto.
        (vfloat32mf2x4_t): Ditto.
        (vfloat32mf2x5_t): Ditto.
        (vfloat32mf2x6_t): Ditto.
        (vfloat32mf2x7_t): Ditto.
        (vfloat32mf2x8_t): Ditto.
        (vfloat32m1x2_t): Ditto.
        (vfloat32m1x3_t): Ditto.
        (vfloat32m1x4_t): Ditto.
        (vfloat32m1x5_t): Ditto.
        (vfloat32m1x6_t): Ditto.
        (vfloat32m1x7_t): Ditto.
        (vfloat32m1x8_t): Ditto.
        (vfloat32m2x2_t): Ditto.
        (vfloat32m2x3_t): Ditto.
        (vfloat32m2x4_t): Ditto.
        (vfloat32m4x2_t): Ditto.
        (vfloat64m1x2_t): Ditto.
        (vfloat64m1x3_t): Ditto.
        (vfloat64m1x4_t): Ditto.
        (vfloat64m1x5_t): Ditto.
        (vfloat64m1x6_t): Ditto.
        (vfloat64m1x7_t): Ditto.
        (vfloat64m1x8_t): Ditto.
        (vfloat64m2x2_t): Ditto.
        (vfloat64m2x3_t): Ditto.
        (vfloat64m2x4_t): Ditto.
        (vfloat64m4x2_t): Ditto.
        * config/riscv/riscv-vector-builtins.h (DEF_RVV_TUPLE_TYPE): Ditto.
        * config/riscv/riscv-vector-switch.def (TUPLE_ENTRY): Ditto.
        * config/riscv/riscv.cc (riscv_v_ext_tuple_mode_p): New function.
        (TUPLE_ENTRY): Ditto.
        (riscv_v_ext_mode_p): New function.
        (riscv_v_adjust_nunits): Add tuple mode adjustment.
        (riscv_classify_address): Ditto.
        (riscv_binary_cost): Ditto.
        (riscv_rtx_costs): Ditto.
        (riscv_secondary_memory_needed): Ditto.
        (riscv_hard_regno_nregs): Ditto.
        (riscv_hard_regno_mode_ok): Ditto.
        (riscv_vector_mode_supported_p): Ditto.
        (riscv_regmode_natural_size): Ditto.
        (riscv_array_mode): New function.
        (TARGET_ARRAY_MODE): New target hook.
        * config/riscv/riscv.md: Add tuple modes.
        * config/riscv/vector-iterators.md: Ditto.
        * config/riscv/vector.md (mov<mode>): Add tuple modes data movement.
        (*mov<VT:mode>_<P:mode>): Ditto.

---
 gcc/config/riscv/riscv-modes.def           | 133 ++++++++++++
 gcc/config/riscv/riscv-protos.h            |   5 +
 gcc/config/riscv/riscv-v.cc                | 188 +++++++++++++++-
 gcc/config/riscv/riscv-vector-builtins.cc  |  78 +++++++
 gcc/config/riscv/riscv-vector-builtins.def | 237 +++++++++++++++++++++
 gcc/config/riscv/riscv-vector-builtins.h   |   1 +
 gcc/config/riscv/riscv-vector-switch.def   | 176 +++++++++++++++
 gcc/config/riscv/riscv.cc                  | 101 +++++++--
 gcc/config/riscv/riscv.md                  |  27 ++-
 gcc/config/riscv/vector-iterators.md       | 186 ++++++++++++++++
 gcc/config/riscv/vector.md                 |  44 ++++
 11 files changed, 1157 insertions(+), 19 deletions(-)

diff --git a/gcc/config/riscv/riscv-modes.def b/gcc/config/riscv/riscv-modes.def
index b1669609eec..19a4f9fb3db 100644
--- a/gcc/config/riscv/riscv-modes.def
+++ b/gcc/config/riscv/riscv-modes.def
@@ -185,6 +185,139 @@ VECTOR_MODE_WITH_PREFIX (VNx, INT, QI, 1, 0);
 ADJUST_NUNITS (VNx1QI, riscv_v_adjust_nunits (VNx1QImode, 1));
 ADJUST_ALIGNMENT (VNx1QI, 1);
 
+/* Tuple modes for segment loads/stores.  The NF value ranges from 2 to 8.  */
+
+/*
+   | Mode           | MIN_VLEN=32 | MIN_VLEN=32 | MIN_VLEN=64 | MIN_VLEN=64 | MIN_VLEN=128 | MIN_VLEN=128 |
+   |                | LMUL        | SEW/LMUL    | LMUL        | SEW/LMUL    | LMUL         | SEW/LMUL     |
+   | VNxNFx1QI      | MF4         | 32          | MF8         | 64          | N/A          | N/A          |
+   | VNxNFx2QI      | MF2         | 16          | MF4         | 32          | MF8          | 64           |
+   | VNxNFx4QI      | M1          | 8           | MF2         | 16          | MF4          | 32           |
+   | VNxNFx8QI      | M2          | 4           | M1          | 8           | MF2          | 16           |
+   | VNxNFx16QI     | M4          | 2           | M2          | 4           | M1           | 8            |
+   | VNxNFx32QI     | M8          | 1           | M4          | 2           | M2           | 4            |
+   | VNxNFx64QI     | N/A         | N/A         | M8          | 1           | M4           | 2            |
+   | VNxNFx128QI    | N/A         | N/A         | N/A         | N/A         | M8           | 1            |
+   | VNxNFx1(HI|HF) | MF2         | 32          | MF4         | 64          | N/A          | N/A          |
+   | VNxNFx2(HI|HF) | M1          | 16          | MF2         | 32          | MF4          | 64           |
+   | VNxNFx4(HI|HF) | M2          | 8           | M1          | 16          | MF2          | 32           |
+   | VNxNFx8(HI|HF) | M4          | 4           | M2          | 8           | M1           | 16           |
+   | VNxNFx16(HI|HF)| M8          | 2           | M4          | 4           | M2           | 8            |
+   | VNxNFx32(HI|HF)| N/A         | N/A         | M8          | 2           | M4           | 4            |
+   | VNxNFx64(HI|HF)| N/A         | N/A         | N/A         | N/A         | M8           | 2            |
+   | VNxNFx1(SI|SF) | M1          | 32          | MF2         | 64          | N/A          | N/A          |
+   | VNxNFx2(SI|SF) | M2          | 16          | M1          | 32          | MF2          | 64           |
+   | VNxNFx4(SI|SF) | M4          | 8           | M2          | 16          | M1           | 32           |
+   | VNxNFx8(SI|SF) | M8          | 4           | M4          | 8           | M2           | 16           |
+   | VNxNFx16(SI|SF)| N/A         | N/A         | M8          | 4           | M4           | 8            |
+   | VNxNFx1(DI|DF) | N/A         | N/A         | M1          | 64          | N/A          | N/A          |
+   | VNxNFx2(DI|DF) | N/A         | N/A         | M2          | 32          | M1           | 64           |
+   | VNxNFx4(DI|DF) | N/A         | N/A         | M4          | 16          | M2           | 32           |
+   | VNxNFx8(DI|DF) | N/A         | N/A         | M8          | 8           | M4           | 16           |
+   | VNxNFx16(DI|DF)| N/A         | N/A         | N/A         | N/A         | M8           | 8            |
+*/
+
+#define RVV_TUPLE_MODES(NBYTES, NSUBPARTS, VB, VH, VS, VD)                     \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, NBYTES, 1);             \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, NBYTES / 2, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, NBYTES / 4, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, NBYTES / 4, 1);       \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, DI, NBYTES / 8, 1);         \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, DF, NBYTES / 8, 1);       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VB##QI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VB##QI##mode,       \
+					VB * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VH##HI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VH##HI##mode,       \
+					VH * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SI##mode,       \
+					VS * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DI,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DI##mode,       \
+					VD * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VS##SF,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VS##SF##mode,       \
+					VS * NSUBPARTS));                      \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x##VD##DF,                                    \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x##VD##DF##mode,       \
+					VD * NSUBPARTS));                      \
+                                                                               \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VB##QI, 1);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VH##HI, 2);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SI, 4);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DI, 8);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VS##SF, 4);                             \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x##VD##DF, 8);
+
+RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 3, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 4, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 5, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 6, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 7, 8, 4, 2, 1)
+RVV_TUPLE_MODES (8, 8, 8, 4, 2, 1)
+
+RVV_TUPLE_MODES (16, 2, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 3, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 4, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 5, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 6, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 7, 16, 8, 4, 2)
+RVV_TUPLE_MODES (16, 8, 16, 8, 4, 2)
+
+RVV_TUPLE_MODES (32, 2, 32, 16, 8, 4)
+RVV_TUPLE_MODES (32, 3, 32, 16, 8, 4)
+RVV_TUPLE_MODES (32, 4, 32, 16, 8, 4)
+
+RVV_TUPLE_MODES (64, 2, 64, 32, 16, 8)
+
+#define RVV_TUPLE_PARTIAL_MODES(NSUBPARTS)                                     \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, SI, 1, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, FLOAT, SF, 1, 1);                \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 2, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, HI, 2, 1);                  \
+  VECTOR_MODE_WITH_PREFIX (VNx##NSUBPARTS##x, INT, QI, 4, 1);                  \
+                                                                               \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1QI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1HI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1HI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1SI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SI##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x1SF,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x1SF##mode,            \
+					NSUBPARTS));                           \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x2QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x2QI##mode,            \
+					2 * NSUBPARTS));                       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x2HI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x2HI##mode,            \
+					2 * NSUBPARTS));                       \
+  ADJUST_NUNITS (VNx##NSUBPARTS##x4QI,                                         \
+		 riscv_v_adjust_nunits (VNx##NSUBPARTS##x4QI##mode,            \
+					4 * NSUBPARTS));                       \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1QI, 1);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1HI, 2);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SI, 4);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x1SF, 4);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2QI, 1);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x2HI, 2);                                  \
+  ADJUST_ALIGNMENT (VNx##NSUBPARTS##x4QI, 1);
+
+RVV_TUPLE_PARTIAL_MODES (2)
+RVV_TUPLE_PARTIAL_MODES (3)
+RVV_TUPLE_PARTIAL_MODES (4)
+RVV_TUPLE_PARTIAL_MODES (5)
+RVV_TUPLE_PARTIAL_MODES (6)
+RVV_TUPLE_PARTIAL_MODES (7)
+RVV_TUPLE_PARTIAL_MODES (8)
+
 /* TODO: According to RISC-V 'V' ISA spec, the maximun vector length can
    be 65536 for a single vector register which means the vector mode in
    GCC can be maximum = 65536 * 8 bits (LMUL=8).
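
For illustration (not part of the patch), the first invocation
RVV_TUPLE_MODES (8, 2, 8, 4, 2, 1) expands, for the QI column alone, to:

  VECTOR_MODE_WITH_PREFIX (VNx2x, INT, QI, 8, 1);
  ADJUST_NUNITS (VNx2x8QI, riscv_v_adjust_nunits (VNx2x8QImode, 16));
  ADJUST_ALIGNMENT (VNx2x8QI, 1);

i.e. VNx2x8QImode, a tuple of NF = 2 VNx8QI subparts whose total number of
units is scaled to 2 * 8 = 16.
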
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 5244e8dcbf0..96ab8dd3629 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -78,6 +78,7 @@ extern bool riscv_gpr_save_operation_p (rtx);
 extern void riscv_reinit (void);
 extern poly_uint64 riscv_regmode_natural_size (machine_mode);
 extern bool riscv_v_ext_vector_mode_p (machine_mode);
+extern bool riscv_v_ext_tuple_mode_p (machine_mode);
 extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
 
 /* Routines implemented in riscv-c.cc.  */
@@ -165,6 +166,8 @@ void emit_vlmax_op (unsigned, rtx, rtx, rtx, machine_mode);
 void emit_nonvlmax_op (unsigned, rtx, rtx, rtx, machine_mode);
 enum vlmul_type get_vlmul (machine_mode);
 unsigned int get_ratio (machine_mode);
+unsigned int get_nf (machine_mode);
+machine_mode get_subpart_mode (machine_mode);
 int get_ta (rtx);
 int get_ma (rtx);
 int get_avl_type (rtx);
@@ -186,6 +189,7 @@ enum tail_policy get_prefer_tail_policy ();
 enum mask_policy get_prefer_mask_policy ();
 rtx get_avl_type_rtx (enum avl_type);
 opt_machine_mode get_vector_mode (scalar_mode, poly_uint64);
+opt_machine_mode get_tuple_mode (machine_mode, unsigned int);
 bool simm5_p (rtx);
 bool neg_simm5_p (rtx);
 #ifdef RTX_CODE
@@ -207,6 +211,7 @@ enum vlen_enum
 bool slide1_sew64_helper (int, machine_mode, machine_mode,
 			  machine_mode, rtx *);
 rtx gen_avl_for_scalar_move (rtx);
+void expand_tuple_move (machine_mode, rtx *);
 }
 
 /* We classify builtin types into two classes:
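
As a sketch of how the new hooks fit together (illustrative only, not part
of the patch):

  /* For a tuple mode, get_subpart_mode and get_nf recover the pieces,
     and get_tuple_mode is the inverse mapping.  */
  machine_mode sub = get_subpart_mode (VNx2x8QImode);   /* VNx8QImode    */
  unsigned int nf = get_nf (VNx2x8QImode);              /* 2             */
  opt_machine_mode t = get_tuple_mode (sub, nf);        /* VNx2x8QImode  */
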
diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc
index 99c414cc910..3950aa80338 100644
--- a/gcc/config/riscv/riscv-v.cc
+++ b/gcc/config/riscv/riscv-v.cc
@@ -342,17 +342,32 @@ struct mode_vtype_group
   uint8_t ratio_for_min_vlen64[NUM_MACHINE_MODES];
   enum vlmul_type vlmul_for_for_vlen128[NUM_MACHINE_MODES];
   uint8_t ratio_for_for_vlen128[NUM_MACHINE_MODES];
+  machine_mode subpart_mode[NUM_MACHINE_MODES];
+  uint8_t nf[NUM_MACHINE_MODES];
   mode_vtype_group ()
   {
 #define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLEN32,   \
 	      VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64,                      \
-	      VLMUL_FOR_FOR_VLEN128, RATIO_FOR_FOR_VLEN128)                    \
+	      VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)                    \
   vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;                     \
   ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;                     \
   vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;                     \
   ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;                     \
-  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_FOR_VLEN128;                   \
-  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_FOR_VLEN128;
+  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;                   \
+  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
+#include "riscv-vector-switch.def"
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
+		    RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64,                \
+		    RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128,               \
+		    RATIO_FOR_MIN_VLEN128)                                     \
+  subpart_mode[MODE##mode] = SUBPART_MODE##mode;                               \
+  nf[MODE##mode] = NF;                                                         \
+  vlmul_for_min_vlen32[MODE##mode] = VLMUL_FOR_MIN_VLEN32;                     \
+  ratio_for_min_vlen32[MODE##mode] = RATIO_FOR_MIN_VLEN32;                     \
+  vlmul_for_min_vlen64[MODE##mode] = VLMUL_FOR_MIN_VLEN64;                     \
+  ratio_for_min_vlen64[MODE##mode] = RATIO_FOR_MIN_VLEN64;                     \
+  vlmul_for_for_vlen128[MODE##mode] = VLMUL_FOR_MIN_VLEN128;                   \
+  ratio_for_for_vlen128[MODE##mode] = RATIO_FOR_MIN_VLEN128;
 #include "riscv-vector-switch.def"
   }
 };
@@ -371,6 +386,26 @@ get_vlmul (machine_mode mode)
     return mode_vtype_infos.vlmul_for_min_vlen64[mode];
 }
 
+/* Return the NF value of the corresponding mode.  */
+unsigned int
+get_nf (machine_mode mode)
+{
+  /* We don't allow non-tuple modes to go through this function.  */
+  gcc_assert (riscv_v_ext_tuple_mode_p (mode));
+  return mode_vtype_infos.nf[mode];
+}
+
+/* Return the subpart mode of the tuple mode.  For VNx2x1SImode,
+   the subpart mode is VNx1SImode.  This helps to build the
+   array/struct types in the builtins.  */
+machine_mode
+get_subpart_mode (machine_mode mode)
+{
+  /* We don't allow non-tuple modes to go through this function.  */
+  gcc_assert (riscv_v_ext_tuple_mode_p (mode));
+  return mode_vtype_infos.subpart_mode[mode];
+}
+
 /* Get ratio according to machine mode.  */
 unsigned int
 get_ratio (machine_mode mode)
@@ -452,6 +487,24 @@ get_vector_mode (scalar_mode inner_mode, poly_uint64 nunits)
   return opt_machine_mode ();
 }
 
+/* Return the RVV tuple mode if we can find the legal tuple mode for the
+   corresponding subpart mode and NF.  */
+opt_machine_mode
+get_tuple_mode (machine_mode subpart_mode, unsigned int nf)
+{
+  poly_uint64 nunits = GET_MODE_NUNITS (subpart_mode) * nf;
+  scalar_mode inner_mode = GET_MODE_INNER (subpart_mode);
+  enum mode_class mclass = GET_MODE_CLASS (subpart_mode);
+  machine_mode mode;
+  FOR_EACH_MODE_IN_CLASS (mode, mclass)
+    if (inner_mode == GET_MODE_INNER (mode)
+	&& known_eq (nunits, GET_MODE_NUNITS (mode))
+	&& riscv_v_ext_tuple_mode_p (mode)
+	&& get_subpart_mode (mode) == subpart_mode)
+      return mode;
+  return opt_machine_mode ();
+}
+
 bool
 simm5_p (rtx x)
 {
@@ -742,4 +795,133 @@ gen_avl_for_scalar_move (rtx avl)
     }
 }
 
+/* Expand data movement for tuple modes.  */
+void
+expand_tuple_move (machine_mode mask_mode, rtx *ops)
+{
+  unsigned int i;
+  machine_mode tuple_mode = GET_MODE (ops[0]);
+  machine_mode subpart_mode = get_subpart_mode (tuple_mode);
+  poly_int64 subpart_size = GET_MODE_SIZE (subpart_mode);
+  unsigned int nf = get_nf (tuple_mode);
+  bool fractional_p = known_lt (subpart_size, BYTES_PER_RISCV_VECTOR);
+
+  if (REG_P (ops[0]) && CONST_VECTOR_P (ops[1]))
+    {
+      rtx val;
+      gcc_assert (can_create_pseudo_p ()
+		  && const_vec_duplicate_p (ops[1], &val));
+      for (i = 0; i < nf; ++i)
+	{
+	  poly_int64 offset = i * subpart_size;
+	  rtx subreg
+	    = simplify_gen_subreg (subpart_mode, ops[0], tuple_mode, offset);
+	  rtx dup = gen_const_vec_duplicate (subpart_mode, val);
+	  emit_move_insn (subreg, dup);
+	}
+    }
+  else if (REG_P (ops[0]) && REG_P (ops[1]))
+    {
+      for (i = 0; i < nf; ++i)
+	{
+	  int index = i;
+
+	  /* Take NF = 2 and LMUL = 1 for example:
+
+	      - move v8 to v9:
+		 vmv1r v10,v9
+		 vmv1r v9,v8
+
+	      - move v8 to v7:
+		 vmv1r v7,v8
+		 vmv1r v8,v9  */
+	  if (REGNO (ops[0]) > REGNO (ops[1]))
+	    index = nf - 1 - i;
+	  poly_int64 offset = index * subpart_size;
+	  rtx dst_subreg
+	    = simplify_gen_subreg (subpart_mode, ops[0], tuple_mode, offset);
+	  rtx src_subreg
+	    = simplify_gen_subreg (subpart_mode, ops[1], tuple_mode, offset);
+	  emit_insn (gen_rtx_SET (dst_subreg, src_subreg));
+	}
+    }
+  else
+    {
+      /* Expand tuple memory data movement.  */
+      gcc_assert (MEM_P (ops[0]) || MEM_P (ops[1]));
+      rtx offset = gen_int_mode (subpart_size, Pmode);
+      if (!subpart_size.is_constant ())
+	{
+	  emit_move_insn (ops[2], gen_int_mode (BYTES_PER_RISCV_VECTOR, Pmode));
+	  if (fractional_p)
+	    {
+	      unsigned int factor
+		= exact_div (BYTES_PER_RISCV_VECTOR, subpart_size)
+		    .to_constant ();
+	      rtx pat
+		= gen_rtx_ASHIFTRT (Pmode, ops[2],
+				    gen_int_mode (exact_log2 (factor), Pmode));
+	      emit_insn (gen_rtx_SET (ops[2], pat));
+	    }
+
+	  if (known_gt (subpart_size, BYTES_PER_RISCV_VECTOR))
+	    {
+	      unsigned int factor
+		= exact_div (subpart_size, BYTES_PER_RISCV_VECTOR)
+		    .to_constant ();
+	      rtx pat
+		= gen_rtx_ASHIFT (Pmode, ops[2],
+				  gen_int_mode (exact_log2 (factor), Pmode));
+	      emit_insn (gen_rtx_SET (ops[2], pat));
+	    }
+	  offset = ops[2];
+	}
+
+      if (MEM_P (ops[1]))
+	{
+	  /* Load operations.  */
+	  emit_move_insn (ops[3], XEXP (ops[1], 0));
+	  for (i = 0; i < nf; i++)
+	    {
+	      rtx subreg = simplify_gen_subreg (subpart_mode, ops[0],
+						tuple_mode, i * subpart_size);
+	      if (i != 0)
+		{
+		  rtx new_addr = gen_rtx_PLUS (Pmode, ops[3], offset);
+		  emit_insn (gen_rtx_SET (ops[3], new_addr));
+		}
+	      rtx mem = gen_rtx_MEM (subpart_mode, ops[3]);
+
+	      if (fractional_p)
+		emit_vlmax_op (code_for_pred_mov (subpart_mode), subreg, mem,
+			       ops[4], mask_mode);
+	      else
+		emit_move_insn (subreg, mem);
+	    }
+	}
+      else
+	{
+	  /* Store operations.  */
+	  emit_move_insn (ops[3], XEXP (ops[0], 0));
+	  for (i = 0; i < nf; i++)
+	    {
+	      rtx subreg = simplify_gen_subreg (subpart_mode, ops[1],
+						tuple_mode, i * subpart_size);
+	      if (i != 0)
+		{
+		  rtx new_addr = gen_rtx_PLUS (Pmode, ops[3], offset);
+		  emit_insn (gen_rtx_SET (ops[3], new_addr));
+		}
+	      rtx mem = gen_rtx_MEM (subpart_mode, ops[3]);
+
+	      if (fractional_p)
+		emit_vlmax_op (code_for_pred_mov (subpart_mode), mem, subreg,
+			       ops[4], mask_mode);
+	      else
+		emit_move_insn (mem, subreg);
+	    }
+	}
+    }
+}
+
 } // namespace riscv_vector
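
The reverse-order trick in expand_tuple_move generalizes to any overlapping
register-to-register tuple copy.  A standalone sketch of the idea, where
emit_subpart_move is a hypothetical stand-in for emitting one vmv<LMUL>r.v
(not part of the patch):

  void
  copy_tuple_subparts (unsigned int dst, unsigned int src, unsigned int nf)
  {
    for (unsigned int i = 0; i < nf; ++i)
      {
        /* When the destination starts above the source, copy from the
           last subpart down so no source register is clobbered before it
           is read; otherwise copy from the first subpart up.  */
        unsigned int index = dst > src ? nf - 1 - i : i;
        emit_subpart_move (dst + index, src + index);
      }
  }
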
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 434bd8e157b..3cfa9c90181 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -95,6 +95,8 @@ struct registered_function_hasher : nofree_ptr_hash<registered_function>
 static CONSTEXPR const vector_type_info vector_types[] = {
 #define DEF_RVV_TYPE(NAME, NCHARS, ABI_NAME, ARGS...)                          \
   {#NAME, #ABI_NAME, "u" #NCHARS #ABI_NAME},
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, ARGS...)                    \
+  {#NAME, #ABI_NAME, "u" #NCHARS #ABI_NAME},
 #include "riscv-vector-builtins.def"
 };
 
@@ -112,6 +114,9 @@ const rvv_builtin_suffixes type_suffixes[NUM_VECTOR_TYPES + 1] = {
 		     VECTOR_MODE_MIN_VLEN_32, VECTOR_SUFFIX, SCALAR_SUFFIX,    \
 		     VSETVL_SUFFIX)                                            \
   {#VECTOR_SUFFIX, #SCALAR_SUFFIX, #VSETVL_SUFFIX},
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)                                  \
+  {#VECTOR_SUFFIX, "", ""},
 #include "riscv-vector-builtins.def"
 };
 
@@ -2336,6 +2341,75 @@ register_builtin_type (vector_type_index type, tree eltype, machine_mode mode)
   lang_hooks.types.register_builtin_type (vectype, vector_types[type].abi_name);
 }
 
+/* Register the tuple type TYPE that contains NF vectors of SUBPART_TYPE.  */
+static void
+register_tuple_type (vector_type_index type, vector_type_index subpart_type,
+		     tree eltype, unsigned int nf)
+{
+  /* TODO: We currently just skip registering an illegal RVV tuple type.
+    Ideally, we should report a friendlier error message instead of
+    reporting an "unknown" type.  Support such error messages in the
+    future.  */
+  if (!abi_vector_types[subpart_type])
+    return;
+  tree tuple_type = lang_hooks.types.make_type (RECORD_TYPE);
+
+  /* The contents of the type are opaque, so we can define them in any
+     way that maps to the correct ABI type.
+
+     Here we choose to use the same layout as for riscv_vector.h, with
+     "__val":
+
+	struct vfooxN_t { vfoo_t __val[N]; };
+
+     (It wouldn't be possible to write that directly in C or C++ for
+     sizeless types, but that's not a problem for this function.)
+
+     Using arrays simplifies the handling of vget and vset for variable
+     arguments.  */
+  tree array_type = build_array_type_nelts (abi_vector_types[subpart_type], nf);
+  gcc_assert (array_type);
+  gcc_assert (VECTOR_MODE_P (TYPE_MODE (array_type))
+	      && TYPE_MODE_RAW (array_type) == TYPE_MODE (array_type));
+
+  tree field = build_decl (input_location, FIELD_DECL, get_identifier ("__val"),
+			   array_type);
+  DECL_FIELD_CONTEXT (field) = tuple_type;
+  TYPE_FIELDS (tuple_type) = field;
+  add_vector_type_attribute (tuple_type, vector_types[type].mangled_name);
+  make_type_sizeless (tuple_type);
+  layout_type (tuple_type);
+  gcc_assert (VECTOR_MODE_P (TYPE_MODE (tuple_type))
+	      && TYPE_MODE_RAW (tuple_type) == TYPE_MODE (tuple_type));
+
+  tree decl
+    = build_decl (input_location, TYPE_DECL,
+		  get_identifier (vector_types[type].abi_name), tuple_type);
+  TYPE_NAME (tuple_type) = decl;
+  TYPE_STUB_DECL (tuple_type) = decl;
+  lang_hooks.decls.pushdecl (decl);
+  /* ??? Undo the effect of set_underlying_type for C.  The C frontend
+     doesn't recognize DECL as a built-in because (as intended) the decl has
+     a real location instead of BUILTINS_LOCATION.  The frontend therefore
+     treats the decl like a normal C "typedef struct foo foo;", expecting
+     the type for tag "struct foo" to have a dummy unnamed TYPE_DECL instead
+     of the named one we attached above.  It then sets DECL_ORIGINAL_TYPE
+     on the supposedly unnamed decl, creating a circularity that upsets
+     dwarf2out.
+
+     We don't want to follow the normal C model and create "struct foo"
+     tags for tuple types since (a) the types are supposed to be opaque
+     and (b) they couldn't be defined as a real struct anyway.  Treating
+     the TYPE_DECLs as "typedef struct foo foo;" without creating
+     "struct foo" would lead to confusing error messages.  */
+  DECL_ORIGINAL_TYPE (decl) = NULL_TREE;
+
+  builtin_types[type].scalar = eltype;
+  builtin_types[type].scalar_ptr = build_pointer_type (eltype);
+  builtin_types[type].scalar_const_ptr = build_const_pointer (eltype);
+  abi_vector_types[type] = tuple_type;
+}
+
 /* Register the built-in RVV ABI types, such as __rvv_int32m1_t.  */
 static void
 register_builtin_types ()
@@ -2358,6 +2432,10 @@ register_builtin_types ()
 	 : TARGET_MIN_VLEN >= 64 ? VECTOR_MODE_MIN_VLEN_64##mode               \
 				 : VECTOR_MODE_MIN_VLEN_32##mode;              \
   register_builtin_type (VECTOR_TYPE_##NAME, SCALAR_TYPE##_type_node, mode);
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)                                  \
+  register_tuple_type (VECTOR_TYPE_##NAME, VECTOR_TYPE_##SUBPART_TYPE,         \
+		       SCALAR_TYPE##_type_node, NF);
 #include "riscv-vector-builtins.def"
 }
 
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
index 64c09b5d8cb..b0d6edda1b6 100644
--- a/gcc/config/riscv/riscv-vector-builtins.def
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -48,6 +48,11 @@ along with GCC; see the file COPYING3.  If not see
 		     VSETVL_SUFFIX)
 #endif
 
+#ifndef DEF_RVV_TUPLE_TYPE
+#define DEF_RVV_TUPLE_TYPE(NAME, NCHARS, ABI_NAME, SUBPART_TYPE, SCALAR_TYPE,  \
+			   NF, VECTOR_SUFFIX)
+#endif
+
 /* Use "DEF_RVV_OP_TYPE" macro to define RVV operand types.
    The 'NAME' will be concatenated into intrinsic function name.  */
 #ifndef DEF_RVV_OP_TYPE
@@ -323,6 +328,237 @@ DEF_RVV_TYPE (vfloat64m4_t, 17, __rvv_float64m4_t, double, VNx8DF, VNx4DF, VOID,
 DEF_RVV_TYPE (vfloat64m8_t, 17, __rvv_float64m8_t, double, VNx16DF, VNx8DF, VOID, _f64m8,
 	      _f64, _e64m8)
 
+/* Define tuple types for segment loads/stores.  Each tuple type is named
+   vint<SEW><LMUL>x<NF>_t and always satisfies LMUL * NF <= 8.  */
+/* Define tuple types for SEW = 8, LMUL = MF8.  */
+DEF_RVV_TUPLE_TYPE (vint8mf8x2_t, 17, __rvv_int8mf8x2_t, vint8mf8_t, int8, 2, _i8mf8x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x2_t, 18, __rvv_uint8mf8x2_t, vuint8mf8_t, uint8, 2, _u8mf8x2)
+DEF_RVV_TUPLE_TYPE (vint8mf8x3_t, 17, __rvv_int8mf8x3_t, vint8mf8_t, int8, 3, _i8mf8x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x3_t, 18, __rvv_uint8mf8x3_t, vuint8mf8_t, uint8, 3, _u8mf8x3)
+DEF_RVV_TUPLE_TYPE (vint8mf8x4_t, 17, __rvv_int8mf8x4_t, vint8mf8_t, int8, 4, _i8mf8x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x4_t, 18, __rvv_uint8mf8x4_t, vuint8mf8_t, uint8, 4, _u8mf8x4)
+DEF_RVV_TUPLE_TYPE (vint8mf8x5_t, 17, __rvv_int8mf8x5_t, vint8mf8_t, int8, 5, _i8mf8x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x5_t, 18, __rvv_uint8mf8x5_t, vuint8mf8_t, uint8, 5, _u8mf8x5)
+DEF_RVV_TUPLE_TYPE (vint8mf8x6_t, 17, __rvv_int8mf8x6_t, vint8mf8_t, int8, 6, _i8mf8x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x6_t, 18, __rvv_uint8mf8x6_t, vuint8mf8_t, uint8, 6, _u8mf8x6)
+DEF_RVV_TUPLE_TYPE (vint8mf8x7_t, 17, __rvv_int8mf8x7_t, vint8mf8_t, int8, 7, _i8mf8x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x7_t, 18, __rvv_uint8mf8x7_t, vuint8mf8_t, uint8, 7, _u8mf8x7)
+DEF_RVV_TUPLE_TYPE (vint8mf8x8_t, 17, __rvv_int8mf8x8_t, vint8mf8_t, int8, 8, _i8mf8x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf8x8_t, 18, __rvv_uint8mf8x8_t, vuint8mf8_t, uint8, 8, _u8mf8x8)
+/* Define tuple types for SEW = 8, LMUL = MF4.  */
+DEF_RVV_TUPLE_TYPE (vint8mf4x2_t, 17, __rvv_int8mf4x2_t, vint8mf4_t, int8, 2, _i8mf4x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x2_t, 18, __rvv_uint8mf4x2_t, vuint8mf4_t, uint8, 2, _u8mf4x2)
+DEF_RVV_TUPLE_TYPE (vint8mf4x3_t, 17, __rvv_int8mf4x3_t, vint8mf4_t, int8, 3, _i8mf4x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x3_t, 18, __rvv_uint8mf4x3_t, vuint8mf4_t, uint8, 3, _u8mf4x3)
+DEF_RVV_TUPLE_TYPE (vint8mf4x4_t, 17, __rvv_int8mf4x4_t, vint8mf4_t, int8, 4, _i8mf4x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x4_t, 18, __rvv_uint8mf4x4_t, vuint8mf4_t, uint8, 4, _u8mf4x4)
+DEF_RVV_TUPLE_TYPE (vint8mf4x5_t, 17, __rvv_int8mf4x5_t, vint8mf4_t, int8, 5, _i8mf4x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x5_t, 18, __rvv_uint8mf4x5_t, vuint8mf4_t, uint8, 5, _u8mf4x5)
+DEF_RVV_TUPLE_TYPE (vint8mf4x6_t, 17, __rvv_int8mf4x6_t, vint8mf4_t, int8, 6, _i8mf4x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x6_t, 18, __rvv_uint8mf4x6_t, vuint8mf4_t, uint8, 6, _u8mf4x6)
+DEF_RVV_TUPLE_TYPE (vint8mf4x7_t, 17, __rvv_int8mf4x7_t, vint8mf4_t, int8, 7, _i8mf4x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x7_t, 18, __rvv_uint8mf4x7_t, vuint8mf4_t, uint8, 7, _u8mf4x7)
+DEF_RVV_TUPLE_TYPE (vint8mf4x8_t, 17, __rvv_int8mf4x8_t, vint8mf4_t, int8, 8, _i8mf4x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf4x8_t, 18, __rvv_uint8mf4x8_t, vuint8mf4_t, uint8, 8, _u8mf4x8)
+/* Define tuple types for SEW = 8, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint8mf2x2_t, 17, __rvv_int8mf2x2_t, vint8mf2_t, int8, 2, _i8mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x2_t, 18, __rvv_uint8mf2x2_t, vuint8mf2_t, uint8, 2, _u8mf2x2)
+DEF_RVV_TUPLE_TYPE (vint8mf2x3_t, 17, __rvv_int8mf2x3_t, vint8mf2_t, int8, 3, _i8mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x3_t, 18, __rvv_uint8mf2x3_t, vuint8mf2_t, uint8, 3, _u8mf2x3)
+DEF_RVV_TUPLE_TYPE (vint8mf2x4_t, 17, __rvv_int8mf2x4_t, vint8mf2_t, int8, 4, _i8mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x4_t, 18, __rvv_uint8mf2x4_t, vuint8mf2_t, uint8, 4, _u8mf2x4)
+DEF_RVV_TUPLE_TYPE (vint8mf2x5_t, 17, __rvv_int8mf2x5_t, vint8mf2_t, int8, 5, _i8mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x5_t, 18, __rvv_uint8mf2x5_t, vuint8mf2_t, uint8, 5, _u8mf2x5)
+DEF_RVV_TUPLE_TYPE (vint8mf2x6_t, 17, __rvv_int8mf2x6_t, vint8mf2_t, int8, 6, _i8mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x6_t, 18, __rvv_uint8mf2x6_t, vuint8mf2_t, uint8, 6, _u8mf2x6)
+DEF_RVV_TUPLE_TYPE (vint8mf2x7_t, 17, __rvv_int8mf2x7_t, vint8mf2_t, int8, 7, _i8mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x7_t, 18, __rvv_uint8mf2x7_t, vuint8mf2_t, uint8, 7, _u8mf2x7)
+DEF_RVV_TUPLE_TYPE (vint8mf2x8_t, 17, __rvv_int8mf2x8_t, vint8mf2_t, int8, 8, _i8mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint8mf2x8_t, 18, __rvv_uint8mf2x8_t, vuint8mf2_t, uint8, 8, _u8mf2x8)
+/* Define tuple types for SEW = 8, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint8m1x2_t, 16, __rvv_int8m1x2_t, vint8m1_t, int8, 2, _i8m1x2)
+DEF_RVV_TUPLE_TYPE (vuint8m1x2_t, 17, __rvv_uint8m1x2_t, vuint8m1_t, uint8, 2, _u8m1x2)
+DEF_RVV_TUPLE_TYPE (vint8m1x3_t, 16, __rvv_int8m1x3_t, vint8m1_t, int8, 3, _i8m1x3)
+DEF_RVV_TUPLE_TYPE (vuint8m1x3_t, 17, __rvv_uint8m1x3_t, vuint8m1_t, uint8, 3, _u8m1x3)
+DEF_RVV_TUPLE_TYPE (vint8m1x4_t, 16, __rvv_int8m1x4_t, vint8m1_t, int8, 4, _i8m1x4)
+DEF_RVV_TUPLE_TYPE (vuint8m1x4_t, 17, __rvv_uint8m1x4_t, vuint8m1_t, uint8, 4, _u8m1x4)
+DEF_RVV_TUPLE_TYPE (vint8m1x5_t, 16, __rvv_int8m1x5_t, vint8m1_t, int8, 5, _i8m1x5)
+DEF_RVV_TUPLE_TYPE (vuint8m1x5_t, 17, __rvv_uint8m1x5_t, vuint8m1_t, uint8, 5, _u8m1x5)
+DEF_RVV_TUPLE_TYPE (vint8m1x6_t, 16, __rvv_int8m1x6_t, vint8m1_t, int8, 6, _i8m1x6)
+DEF_RVV_TUPLE_TYPE (vuint8m1x6_t, 17, __rvv_uint8m1x6_t, vuint8m1_t, uint8, 6, _u8m1x6)
+DEF_RVV_TUPLE_TYPE (vint8m1x7_t, 16, __rvv_int8m1x7_t, vint8m1_t, int8, 7, _i8m1x7)
+DEF_RVV_TUPLE_TYPE (vuint8m1x7_t, 17, __rvv_uint8m1x7_t, vuint8m1_t, uint8, 7, _u8m1x7)
+DEF_RVV_TUPLE_TYPE (vint8m1x8_t, 16, __rvv_int8m1x8_t, vint8m1_t, int8, 8, _i8m1x8)
+DEF_RVV_TUPLE_TYPE (vuint8m1x8_t, 17, __rvv_uint8m1x8_t, vuint8m1_t, uint8, 8, _u8m1x8)
+/* Define tuple types for SEW = 8, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint8m2x2_t, 16, __rvv_int8m2x2_t, vint8m2_t, int8, 2, _i8m2x2)
+DEF_RVV_TUPLE_TYPE (vuint8m2x2_t, 17, __rvv_uint8m2x2_t, vuint8m2_t, uint8, 2, _u8m2x2)
+DEF_RVV_TUPLE_TYPE (vint8m2x3_t, 16, __rvv_int8m2x3_t, vint8m2_t, int8, 3, _i8m2x3)
+DEF_RVV_TUPLE_TYPE (vuint8m2x3_t, 17, __rvv_uint8m2x3_t, vuint8m2_t, uint8, 3, _u8m2x3)
+DEF_RVV_TUPLE_TYPE (vint8m2x4_t, 16, __rvv_int8m2x4_t, vint8m2_t, int8, 4, _i8m2x4)
+DEF_RVV_TUPLE_TYPE (vuint8m2x4_t, 17, __rvv_uint8m2x4_t, vuint8m2_t, uint8, 4, _u8m2x4)
+/* Define tuple types for SEW = 8, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint8m4x2_t, 16, __rvv_int8m4x2_t, vint8m4_t, int8, 2, _i8m4x2)
+DEF_RVV_TUPLE_TYPE (vuint8m4x2_t, 17, __rvv_uint8m4x2_t, vuint8m4_t, uint8, 2, _u8m4x2)
+/* Define tuple types for SEW = 16, LMUL = MF4.  */
+DEF_RVV_TUPLE_TYPE (vint16mf4x2_t, 18, __rvv_int16mf4x2_t, vint16mf4_t, int16, 2, _i16mf4x2)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x2_t, 19, __rvv_uint16mf4x2_t, vuint16mf4_t, uint16, 2, _u16mf4x2)
+DEF_RVV_TUPLE_TYPE (vint16mf4x3_t, 18, __rvv_int16mf4x3_t, vint16mf4_t, int16, 3, _i16mf4x3)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x3_t, 19, __rvv_uint16mf4x3_t, vuint16mf4_t, uint16, 3, _u16mf4x3)
+DEF_RVV_TUPLE_TYPE (vint16mf4x4_t, 18, __rvv_int16mf4x4_t, vint16mf4_t, int16, 4, _i16mf4x4)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x4_t, 19, __rvv_uint16mf4x4_t, vuint16mf4_t, uint16, 4, _u16mf4x4)
+DEF_RVV_TUPLE_TYPE (vint16mf4x5_t, 18, __rvv_int16mf4x5_t, vint16mf4_t, int16, 5, _i16mf4x5)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x5_t, 19, __rvv_uint16mf4x5_t, vuint16mf4_t, uint16, 5, _u16mf4x5)
+DEF_RVV_TUPLE_TYPE (vint16mf4x6_t, 18, __rvv_int16mf4x6_t, vint16mf4_t, int16, 6, _i16mf4x6)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x6_t, 19, __rvv_uint16mf4x6_t, vuint16mf4_t, uint16, 6, _u16mf4x6)
+DEF_RVV_TUPLE_TYPE (vint16mf4x7_t, 18, __rvv_int16mf4x7_t, vint16mf4_t, int16, 7, _i16mf4x7)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x7_t, 19, __rvv_uint16mf4x7_t, vuint16mf4_t, uint16, 7, _u16mf4x7)
+DEF_RVV_TUPLE_TYPE (vint16mf4x8_t, 18, __rvv_int16mf4x8_t, vint16mf4_t, int16, 8, _i16mf4x8)
+DEF_RVV_TUPLE_TYPE (vuint16mf4x8_t, 19, __rvv_uint16mf4x8_t, vuint16mf4_t, uint16, 8, _u16mf4x8)
+/* Define tuple types for SEW = 16, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint16mf2x2_t, 18, __rvv_int16mf2x2_t, vint16mf2_t, int16, 2, _i16mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x2_t, 19, __rvv_uint16mf2x2_t, vuint16mf2_t, uint16, 2, _u16mf2x2)
+DEF_RVV_TUPLE_TYPE (vint16mf2x3_t, 18, __rvv_int16mf2x3_t, vint16mf2_t, int16, 3, _i16mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x3_t, 19, __rvv_uint16mf2x3_t, vuint16mf2_t, uint16, 3, _u16mf2x3)
+DEF_RVV_TUPLE_TYPE (vint16mf2x4_t, 18, __rvv_int16mf2x4_t, vint16mf2_t, int16, 4, _i16mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x4_t, 19, __rvv_uint16mf2x4_t, vuint16mf2_t, uint16, 4, _u16mf2x4)
+DEF_RVV_TUPLE_TYPE (vint16mf2x5_t, 18, __rvv_int16mf2x5_t, vint16mf2_t, int16, 5, _i16mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x5_t, 19, __rvv_uint16mf2x5_t, vuint16mf2_t, uint16, 5, _u16mf2x5)
+DEF_RVV_TUPLE_TYPE (vint16mf2x6_t, 18, __rvv_int16mf2x6_t, vint16mf2_t, int16, 6, _i16mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x6_t, 19, __rvv_uint16mf2x6_t, vuint16mf2_t, uint16, 6, _u16mf2x6)
+DEF_RVV_TUPLE_TYPE (vint16mf2x7_t, 18, __rvv_int16mf2x7_t, vint16mf2_t, int16, 7, _i16mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x7_t, 19, __rvv_uint16mf2x7_t, vuint16mf2_t, uint16, 7, _u16mf2x7)
+DEF_RVV_TUPLE_TYPE (vint16mf2x8_t, 18, __rvv_int16mf2x8_t, vint16mf2_t, int16, 8, _i16mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint16mf2x8_t, 19, __rvv_uint16mf2x8_t, vuint16mf2_t, uint16, 8, _u16mf2x8)
+/* Define tuple types for SEW = 16, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint16m1x2_t, 17, __rvv_int16m1x2_t, vint16m1_t, int16, 2, _i16m1x2)
+DEF_RVV_TUPLE_TYPE (vuint16m1x2_t, 18, __rvv_uint16m1x2_t, vuint16m1_t, uint16, 2, _u16m1x2)
+DEF_RVV_TUPLE_TYPE (vint16m1x3_t, 17, __rvv_int16m1x3_t, vint16m1_t, int16, 3, _i16m1x3)
+DEF_RVV_TUPLE_TYPE (vuint16m1x3_t, 18, __rvv_uint16m1x3_t, vuint16m1_t, uint16, 3, _u16m1x3)
+DEF_RVV_TUPLE_TYPE (vint16m1x4_t, 17, __rvv_int16m1x4_t, vint16m1_t, int16, 4, _i16m1x4)
+DEF_RVV_TUPLE_TYPE (vuint16m1x4_t, 18, __rvv_uint16m1x4_t, vuint16m1_t, uint16, 4, _u16m1x4)
+DEF_RVV_TUPLE_TYPE (vint16m1x5_t, 17, __rvv_int16m1x5_t, vint16m1_t, int16, 5, _i16m1x5)
+DEF_RVV_TUPLE_TYPE (vuint16m1x5_t, 18, __rvv_uint16m1x5_t, vuint16m1_t, uint16, 5, _u16m1x5)
+DEF_RVV_TUPLE_TYPE (vint16m1x6_t, 17, __rvv_int16m1x6_t, vint16m1_t, int16, 6, _i16m1x6)
+DEF_RVV_TUPLE_TYPE (vuint16m1x6_t, 18, __rvv_uint16m1x6_t, vuint16m1_t, uint16, 6, _u16m1x6)
+DEF_RVV_TUPLE_TYPE (vint16m1x7_t, 17, __rvv_int16m1x7_t, vint16m1_t, int16, 7, _i16m1x7)
+DEF_RVV_TUPLE_TYPE (vuint16m1x7_t, 18, __rvv_uint16m1x7_t, vuint16m1_t, uint16, 7, _u16m1x7)
+DEF_RVV_TUPLE_TYPE (vint16m1x8_t, 17, __rvv_int16m1x8_t, vint16m1_t, int16, 8, _i16m1x8)
+DEF_RVV_TUPLE_TYPE (vuint16m1x8_t, 18, __rvv_uint16m1x8_t, vuint16m1_t, uint16, 8, _u16m1x8)
+/* Define tuple types for SEW = 16, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint16m2x2_t, 17, __rvv_int16m2x2_t, vint16m2_t, int16, 2, _i16m2x2)
+DEF_RVV_TUPLE_TYPE (vuint16m2x2_t, 18, __rvv_uint16m2x2_t, vuint16m2_t, uint16, 2, _u16m2x2)
+DEF_RVV_TUPLE_TYPE (vint16m2x3_t, 17, __rvv_int16m2x3_t, vint16m2_t, int16, 3, _i16m2x3)
+DEF_RVV_TUPLE_TYPE (vuint16m2x3_t, 18, __rvv_uint16m2x3_t, vuint16m2_t, uint16, 3, _u16m2x3)
+DEF_RVV_TUPLE_TYPE (vint16m2x4_t, 17, __rvv_int16m2x4_t, vint16m2_t, int16, 4, _i16m2x4)
+DEF_RVV_TUPLE_TYPE (vuint16m2x4_t, 18, __rvv_uint16m2x4_t, vuint16m2_t, uint16, 4, _u16m2x4)
+/* Define tuple types for SEW = 16, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint16m4x2_t, 17, __rvv_int16m4x2_t, vint16m4_t, int16, 2, _i16m4x2)
+DEF_RVV_TUPLE_TYPE (vuint16m4x2_t, 18, __rvv_uint16m4x2_t, vuint16m4_t, uint16, 2, _u16m4x2)
+/* Define tuple types for SEW = 32, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vint32mf2x2_t, 18, __rvv_int32mf2x2_t, vint32mf2_t, int32, 2, _i32mf2x2)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x2_t, 19, __rvv_uint32mf2x2_t, vuint32mf2_t, uint32, 2, _u32mf2x2)
+DEF_RVV_TUPLE_TYPE (vint32mf2x3_t, 18, __rvv_int32mf2x3_t, vint32mf2_t, int32, 3, _i32mf2x3)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x3_t, 19, __rvv_uint32mf2x3_t, vuint32mf2_t, uint32, 3, _u32mf2x3)
+DEF_RVV_TUPLE_TYPE (vint32mf2x4_t, 18, __rvv_int32mf2x4_t, vint32mf2_t, int32, 4, _i32mf2x4)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x4_t, 19, __rvv_uint32mf2x4_t, vuint32mf2_t, uint32, 4, _u32mf2x4)
+DEF_RVV_TUPLE_TYPE (vint32mf2x5_t, 18, __rvv_int32mf2x5_t, vint32mf2_t, int32, 5, _i32mf2x5)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x5_t, 19, __rvv_uint32mf2x5_t, vuint32mf2_t, uint32, 5, _u32mf2x5)
+DEF_RVV_TUPLE_TYPE (vint32mf2x6_t, 18, __rvv_int32mf2x6_t, vint32mf2_t, int32, 6, _i32mf2x6)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x6_t, 19, __rvv_uint32mf2x6_t, vuint32mf2_t, uint32, 6, _u32mf2x6)
+DEF_RVV_TUPLE_TYPE (vint32mf2x7_t, 18, __rvv_int32mf2x7_t, vint32mf2_t, int32, 7, _i32mf2x7)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x7_t, 19, __rvv_uint32mf2x7_t, vuint32mf2_t, uint32, 7, _u32mf2x7)
+DEF_RVV_TUPLE_TYPE (vint32mf2x8_t, 18, __rvv_int32mf2x8_t, vint32mf2_t, int32, 8, _i32mf2x8)
+DEF_RVV_TUPLE_TYPE (vuint32mf2x8_t, 19, __rvv_uint32mf2x8_t, vuint32mf2_t, uint32, 8, _u32mf2x8)
+/* Define tuple types for SEW = 32, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint32m1x2_t, 17, __rvv_int32m1x2_t, vint32m1_t, int32, 2, _i32m1x2)
+DEF_RVV_TUPLE_TYPE (vuint32m1x2_t, 18, __rvv_uint32m1x2_t, vuint32m1_t, uint32, 2, _u32m1x2)
+DEF_RVV_TUPLE_TYPE (vint32m1x3_t, 17, __rvv_int32m1x3_t, vint32m1_t, int32, 3, _i32m1x3)
+DEF_RVV_TUPLE_TYPE (vuint32m1x3_t, 18, __rvv_uint32m1x3_t, vuint32m1_t, uint32, 3, _u32m1x3)
+DEF_RVV_TUPLE_TYPE (vint32m1x4_t, 17, __rvv_int32m1x4_t, vint32m1_t, int32, 4, _i32m1x4)
+DEF_RVV_TUPLE_TYPE (vuint32m1x4_t, 18, __rvv_uint32m1x4_t, vuint32m1_t, uint32, 4, _u32m1x4)
+DEF_RVV_TUPLE_TYPE (vint32m1x5_t, 17, __rvv_int32m1x5_t, vint32m1_t, int32, 5, _i32m1x5)
+DEF_RVV_TUPLE_TYPE (vuint32m1x5_t, 18, __rvv_uint32m1x5_t, vuint32m1_t, uint32, 5, _u32m1x5)
+DEF_RVV_TUPLE_TYPE (vint32m1x6_t, 17, __rvv_int32m1x6_t, vint32m1_t, int32, 6, _i32m1x6)
+DEF_RVV_TUPLE_TYPE (vuint32m1x6_t, 18, __rvv_uint32m1x6_t, vuint32m1_t, uint32, 6, _u32m1x6)
+DEF_RVV_TUPLE_TYPE (vint32m1x7_t, 17, __rvv_int32m1x7_t, vint32m1_t, int32, 7, _i32m1x7)
+DEF_RVV_TUPLE_TYPE (vuint32m1x7_t, 18, __rvv_uint32m1x7_t, vuint32m1_t, uint32, 7, _u32m1x7)
+DEF_RVV_TUPLE_TYPE (vint32m1x8_t, 17, __rvv_int32m1x8_t, vint32m1_t, int32, 8, _i32m1x8)
+DEF_RVV_TUPLE_TYPE (vuint32m1x8_t, 18, __rvv_uint32m1x8_t, vuint32m1_t, uint32, 8, _u32m1x8)
+/* Define tuple types for SEW = 32, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint32m2x2_t, 17, __rvv_int32m2x2_t, vint32m2_t, int32, 2, _i32m2x2)
+DEF_RVV_TUPLE_TYPE (vuint32m2x2_t, 18, __rvv_uint32m2x2_t, vuint32m2_t, uint32, 2, _u32m2x2)
+DEF_RVV_TUPLE_TYPE (vint32m2x3_t, 17, __rvv_int32m2x3_t, vint32m2_t, int32, 3, _i32m2x3)
+DEF_RVV_TUPLE_TYPE (vuint32m2x3_t, 18, __rvv_uint32m2x3_t, vuint32m2_t, uint32, 3, _u32m2x3)
+DEF_RVV_TUPLE_TYPE (vint32m2x4_t, 17, __rvv_int32m2x4_t, vint32m2_t, int32, 4, _i32m2x4)
+DEF_RVV_TUPLE_TYPE (vuint32m2x4_t, 18, __rvv_uint32m2x4_t, vuint32m2_t, uint32, 4, _u32m2x4)
+/* Define tuple types for SEW = 32, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint32m4x2_t, 17, __rvv_int32m4x2_t, vint32m4_t, int32, 2, _i32m4x2)
+DEF_RVV_TUPLE_TYPE (vuint32m4x2_t, 18, __rvv_uint32m4x2_t, vuint32m4_t, uint32, 2, _u32m4x2)
+/* Define tuple types for SEW = 64, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vint64m1x2_t, 17, __rvv_int64m1x2_t, vint64m1_t, int64, 2, _i64m1x2)
+DEF_RVV_TUPLE_TYPE (vuint64m1x2_t, 18, __rvv_uint64m1x2_t, vuint64m1_t, uint64, 2, _u64m1x2)
+DEF_RVV_TUPLE_TYPE (vint64m1x3_t, 17, __rvv_int64m1x3_t, vint64m1_t, int64, 3, _i64m1x3)
+DEF_RVV_TUPLE_TYPE (vuint64m1x3_t, 18, __rvv_uint64m1x3_t, vuint64m1_t, uint64, 3, _u64m1x3)
+DEF_RVV_TUPLE_TYPE (vint64m1x4_t, 17, __rvv_int64m1x4_t, vint64m1_t, int64, 4, _i64m1x4)
+DEF_RVV_TUPLE_TYPE (vuint64m1x4_t, 18, __rvv_uint64m1x4_t, vuint64m1_t, uint64, 4, _u64m1x4)
+DEF_RVV_TUPLE_TYPE (vint64m1x5_t, 17, __rvv_int64m1x5_t, vint64m1_t, int64, 5, _i64m1x5)
+DEF_RVV_TUPLE_TYPE (vuint64m1x5_t, 18, __rvv_uint64m1x5_t, vuint64m1_t, uint64, 5, _u64m1x5)
+DEF_RVV_TUPLE_TYPE (vint64m1x6_t, 17, __rvv_int64m1x6_t, vint64m1_t, int64, 6, _i64m1x6)
+DEF_RVV_TUPLE_TYPE (vuint64m1x6_t, 18, __rvv_uint64m1x6_t, vuint64m1_t, uint64, 6, _u64m1x6)
+DEF_RVV_TUPLE_TYPE (vint64m1x7_t, 17, __rvv_int64m1x7_t, vint64m1_t, int64, 7, _i64m1x7)
+DEF_RVV_TUPLE_TYPE (vuint64m1x7_t, 18, __rvv_uint64m1x7_t, vuint64m1_t, uint64, 7, _u64m1x7)
+DEF_RVV_TUPLE_TYPE (vint64m1x8_t, 17, __rvv_int64m1x8_t, vint64m1_t, int64, 8, _i64m1x8)
+DEF_RVV_TUPLE_TYPE (vuint64m1x8_t, 18, __rvv_uint64m1x8_t, vuint64m1_t, uint64, 8, _u64m1x8)
+/* Define tuple types for SEW = 64, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vint64m2x2_t, 17, __rvv_int64m2x2_t, vint64m2_t, int64, 2, _i64m2x2)
+DEF_RVV_TUPLE_TYPE (vuint64m2x2_t, 18, __rvv_uint64m2x2_t, vuint64m2_t, uint64, 2, _u64m2x2)
+DEF_RVV_TUPLE_TYPE (vint64m2x3_t, 17, __rvv_int64m2x3_t, vint64m2_t, int64, 3, _i64m2x3)
+DEF_RVV_TUPLE_TYPE (vuint64m2x3_t, 18, __rvv_uint64m2x3_t, vuint64m2_t, uint64, 3, _u64m2x3)
+DEF_RVV_TUPLE_TYPE (vint64m2x4_t, 17, __rvv_int64m2x4_t, vint64m2_t, int64, 4, _i64m2x4)
+DEF_RVV_TUPLE_TYPE (vuint64m2x4_t, 18, __rvv_uint64m2x4_t, vuint64m2_t, uint64, 4, _u64m2x4)
+/* Define tuple types for SEW = 64, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vint64m4x2_t, 17, __rvv_int64m4x2_t, vint64m4_t, int64, 2, _i64m4x2)
+DEF_RVV_TUPLE_TYPE (vuint64m4x2_t, 18, __rvv_uint64m4x2_t, vuint64m4_t, uint64, 2, _u64m4x2)
+
+/* Define floating-point tuple types.  */
+/* Define tuple types for SEW = 32, LMUL = MF2.  */
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x2_t, 20, __rvv_float32mf2x2_t, vfloat32mf2_t, float, 2, _f32mf2x2)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x3_t, 20, __rvv_float32mf2x3_t, vfloat32mf2_t, float, 3, _f32mf2x3)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x4_t, 20, __rvv_float32mf2x4_t, vfloat32mf2_t, float, 4, _f32mf2x4)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x5_t, 20, __rvv_float32mf2x5_t, vfloat32mf2_t, float, 5, _f32mf2x5)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x6_t, 20, __rvv_float32mf2x6_t, vfloat32mf2_t, float, 6, _f32mf2x6)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x7_t, 20, __rvv_float32mf2x7_t, vfloat32mf2_t, float, 7, _f32mf2x7)
+DEF_RVV_TUPLE_TYPE (vfloat32mf2x8_t, 20, __rvv_float32mf2x8_t, vfloat32mf2_t, float, 8, _f32mf2x8)
+/* Define tuple types for SEW = 32, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m1x2_t, 19, __rvv_float32m1x2_t, vfloat32m1_t, float, 2, _f32m1x2)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x3_t, 19, __rvv_float32m1x3_t, vfloat32m1_t, float, 3, _f32m1x3)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x4_t, 19, __rvv_float32m1x4_t, vfloat32m1_t, float, 4, _f32m1x4)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x5_t, 19, __rvv_float32m1x5_t, vfloat32m1_t, float, 5, _f32m1x5)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x6_t, 19, __rvv_float32m1x6_t, vfloat32m1_t, float, 6, _f32m1x6)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x7_t, 19, __rvv_float32m1x7_t, vfloat32m1_t, float, 7, _f32m1x7)
+DEF_RVV_TUPLE_TYPE (vfloat32m1x8_t, 19, __rvv_float32m1x8_t, vfloat32m1_t, float, 8, _f32m1x8)
+/* Define tuple types for SEW = 32, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m2x2_t, 19, __rvv_float32m2x2_t, vfloat32m2_t, float, 2, _f32m2x2)
+DEF_RVV_TUPLE_TYPE (vfloat32m2x3_t, 19, __rvv_float32m2x3_t, vfloat32m2_t, float, 3, _f32m2x3)
+DEF_RVV_TUPLE_TYPE (vfloat32m2x4_t, 19, __rvv_float32m2x4_t, vfloat32m2_t, float, 4, _f32m2x4)
+/* Define tuple types for SEW = 32, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vfloat32m4x2_t, 19, __rvv_float32m4x2_t, vfloat32m4_t, float, 2, _f32m4x2)
+/* Define tuple types for SEW = 64, LMUL = M1.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m1x2_t, 19, __rvv_float64m1x2_t, vfloat64m1_t, double, 2, _f64m1x2)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x3_t, 19, __rvv_float64m1x3_t, vfloat64m1_t, double, 3, _f64m1x3)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x4_t, 19, __rvv_float64m1x4_t, vfloat64m1_t, double, 4, _f64m1x4)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x5_t, 19, __rvv_float64m1x5_t, vfloat64m1_t, double, 5, _f64m1x5)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x6_t, 19, __rvv_float64m1x6_t, vfloat64m1_t, double, 6, _f64m1x6)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x7_t, 19, __rvv_float64m1x7_t, vfloat64m1_t, double, 7, _f64m1x7)
+DEF_RVV_TUPLE_TYPE (vfloat64m1x8_t, 19, __rvv_float64m1x8_t, vfloat64m1_t, double, 8, _f64m1x8)
+/* Define tuple types for SEW = 64, LMUL = M2.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m2x2_t, 19, __rvv_float64m2x2_t, vfloat64m2_t, double, 2, _f64m2x2)
+DEF_RVV_TUPLE_TYPE (vfloat64m2x3_t, 19, __rvv_float64m2x3_t, vfloat64m2_t, double, 3, _f64m2x3)
+DEF_RVV_TUPLE_TYPE (vfloat64m2x4_t, 19, __rvv_float64m2x4_t, vfloat64m2_t, double, 4, _f64m2x4)
+/* Define tuple types for SEW = 64, LMUL = M4.  */
+DEF_RVV_TUPLE_TYPE (vfloat64m4x2_t, 19, __rvv_float64m4x2_t, vfloat64m4_t, double, 2, _f64m4x2)
+
 DEF_RVV_OP_TYPE (vv)
 DEF_RVV_OP_TYPE (vx)
 DEF_RVV_OP_TYPE (v)
@@ -417,5 +653,6 @@ DEF_RVV_BASE_TYPE (size_ptr, build_pointer_type (size_type_node))
 #undef DEF_RVV_PRED_TYPE
 #undef DEF_RVV_OP_TYPE
 #undef DEF_RVV_TYPE
+#undef DEF_RVV_TUPLE_TYPE
 #undef DEF_RVV_BASE_TYPE
 #undef DEF_RVV_TYPE_INDEX
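
Note on the NCHARS field: it is the length of ABI_NAME and feeds the
vendor-extended C++ mangling built in riscv-vector-builtins.cc
("u" NCHARS ABI_NAME), e.g.:

  vint8m1x2_t  -> u16__rvv_int8m1x2_t   /* strlen ("__rvv_int8m1x2_t") == 16  */
  vuint8m1x2_t -> u17__rvv_uint8m1x2_t  /* strlen ("__rvv_uint8m1x2_t") == 17 */
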
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 8ffb9d33e33..93261a72134 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -123,6 +123,7 @@ enum operand_type_index
 enum vector_type_index
 {
 #define DEF_RVV_TYPE(NAME, ABI_NAME, NCHARS, ARGS...) VECTOR_TYPE_##NAME,
+#define DEF_RVV_TUPLE_TYPE(NAME, ABI_NAME, NCHARS, ARGS...) VECTOR_TYPE_##NAME,
 #include "riscv-vector-builtins.def"
   NUM_VECTOR_TYPES,
   VECTOR_TYPE_INVALID = NUM_VECTOR_TYPES
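
The #ifndef TUPLE_ENTRY stub added to riscv-vector-switch.def below is the
usual X-macro idiom: each includer defines only the macros it cares about,
and the stub expands everything else to nothing.  A self-contained toy
example of the pattern (not GCC code):

  #include <stdio.h>

  #define TUPLE_ENTRIES        \
    TUPLE_ENTRY (VNx2x8QI, 2)  \
    TUPLE_ENTRY (VNx4x4SI, 4)

  int
  main (void)
  {
  #define TUPLE_ENTRY(MODE, NF) printf ("%s: nf = %d\n", #MODE, NF);
    TUPLE_ENTRIES
  #undef TUPLE_ENTRY
    return 0;
  }
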
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 8aae22d3259..4b1c32de0a3 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -84,6 +84,12 @@ TODO: FP16 vector needs support of 'zvfh', we don't support it yet.  */
 	      VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64,                      \
 	      VLMUL_FOR_MIN_VLEN128, RATIO_FOR_MIN_VLEN128)
 #endif
+#ifndef TUPLE_ENTRY
+#define TUPLE_ENTRY(MODE, REQUIREMENT, SUBPART_MODE, NF, VLMUL_FOR_MIN_VLEN32, \
+		    RATIO_FOR_MIN_VLEN32, VLMUL_FOR_MIN_VLEN64,                \
+		    RATIO_FOR_MIN_VLEN64, VLMUL_FOR_MIN_VLEN128,               \
+		    RATIO_FOR_MIN_VLEN128)
+#endif
 
 /* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN == 32.  */
 ENTRY (VNx128BI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 1)
@@ -157,4 +163,174 @@ ENTRY (VNx4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 3
 ENTRY (VNx2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
 ENTRY (VNx1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
 
+/* Enable or disable the tuple mode.  BASE_MODE is the base vector mode of
+   the tuple mode; for example, the BASE_MODE of VNx2x1SImode is VNx1SImode.
+   All tuple modes must satisfy NF * (LMUL of BASE_MODE) <= 8.  */
+
+/* Tuple modes for EEW = 8.  */
+TUPLE_ENTRY (VNx2x64QI, TARGET_MIN_VLEN >= 128, VNx64QI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 2)
+TUPLE_ENTRY (VNx2x32QI, TARGET_MIN_VLEN >= 64, VNx32QI, 2, LMUL_RESERVED, 0, LMUL_4, 2, LMUL_2, 4)
+TUPLE_ENTRY (VNx3x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
+TUPLE_ENTRY (VNx4x32QI, TARGET_MIN_VLEN >= 128, VNx32QI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 4)
+TUPLE_ENTRY (VNx2x16QI, true, VNx16QI, 2, LMUL_4, 2, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx3x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 3, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx4x16QI, TARGET_MIN_VLEN >= 64, VNx16QI, 4, LMUL_RESERVED, 0, LMUL_2, 4, LMUL_1, 8)
+TUPLE_ENTRY (VNx5x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx6x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx7x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx8x16QI, TARGET_MIN_VLEN >= 128, VNx16QI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 8)
+TUPLE_ENTRY (VNx2x8QI, true, VNx8QI, 2, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx3x8QI, true, VNx8QI, 3, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx4x8QI, true, VNx8QI, 4, LMUL_2, 4, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx5x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 5, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx6x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 6, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx7x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 7, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx8x8QI, TARGET_MIN_VLEN >= 64, VNx8QI, 8, LMUL_RESERVED, 0, LMUL_1, 8, LMUL_F2, 16)
+TUPLE_ENTRY (VNx2x4QI, true, VNx4QI, 2, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx3x4QI, true, VNx4QI, 3, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx4x4QI, true, VNx4QI, 4, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx5x4QI, true, VNx4QI, 5, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx6x4QI, true, VNx4QI, 6, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx7x4QI, true, VNx4QI, 7, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx8x4QI, true, VNx4QI, 8, LMUL_1, 8, LMUL_F2, 16, LMUL_F4, 32)
+TUPLE_ENTRY (VNx2x2QI, true, VNx2QI, 2, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx3x2QI, true, VNx2QI, 3, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx4x2QI, true, VNx2QI, 4, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx5x2QI, true, VNx2QI, 5, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx6x2QI, true, VNx2QI, 6, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx7x2QI, true, VNx2QI, 7, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx8x2QI, true, VNx2QI, 8, LMUL_F2, 16, LMUL_F4, 32, LMUL_F8, 64)
+TUPLE_ENTRY (VNx2x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 2, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 3, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 4, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 5, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 6, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 7, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1QI, TARGET_MIN_VLEN < 128, VNx1QI, 8, LMUL_F4, 32, LMUL_F8, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 16.  */
+TUPLE_ENTRY (VNx2x32HI, TARGET_MIN_VLEN >= 128, VNx32HI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 4)
+TUPLE_ENTRY (VNx2x16HI, TARGET_MIN_VLEN >= 64, VNx16HI, 2, LMUL_RESERVED, 0, LMUL_4, 4, LMUL_2, 8)
+TUPLE_ENTRY (VNx3x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
+TUPLE_ENTRY (VNx4x16HI, TARGET_MIN_VLEN >= 128, VNx16HI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 8)
+TUPLE_ENTRY (VNx2x8HI, true, VNx8HI, 2, LMUL_4, 4, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx3x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 3, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx4x8HI, TARGET_MIN_VLEN >= 64, VNx8HI, 4, LMUL_RESERVED, 0, LMUL_2, 8, LMUL_1, 16)
+TUPLE_ENTRY (VNx5x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx6x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx7x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx8x8HI, TARGET_MIN_VLEN >= 128, VNx8HI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 16)
+TUPLE_ENTRY (VNx2x4HI, true, VNx4HI, 2, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx3x4HI, true, VNx4HI, 3, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx4x4HI, true, VNx4HI, 4, LMUL_2, 8, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx5x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 5, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx6x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 6, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx7x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 7, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx8x4HI, TARGET_MIN_VLEN >= 64, VNx4HI, 8, LMUL_RESERVED, 0, LMUL_1, 16, LMUL_F2, 32)
+TUPLE_ENTRY (VNx2x2HI, true, VNx2HI, 2, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx3x2HI, true, VNx2HI, 3, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx4x2HI, true, VNx2HI, 4, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx5x2HI, true, VNx2HI, 5, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx6x2HI, true, VNx2HI, 6, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx7x2HI, true, VNx2HI, 7, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx8x2HI, true, VNx2HI, 8, LMUL_1, 16, LMUL_F2, 32, LMUL_F4, 64)
+TUPLE_ENTRY (VNx2x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 2, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 3, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 4, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 5, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 6, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 7, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1HI, TARGET_MIN_VLEN < 128, VNx1HI, 8, LMUL_F2, 32, LMUL_F4, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 32.  */
+TUPLE_ENTRY (VNx2x16SI, TARGET_MIN_VLEN >= 128, VNx16SI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
+TUPLE_ENTRY (VNx2x8SI, TARGET_MIN_VLEN >= 64, VNx8SI, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
+TUPLE_ENTRY (VNx3x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx4x8SI, TARGET_MIN_VLEN >= 128, VNx8SI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx3x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx4x4SI, TARGET_MIN_VLEN >= 64, VNx4SI, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx5x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx6x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx7x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx8x4SI, TARGET_MIN_VLEN >= 128, VNx4SI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx2x2SI, true, VNx2SI, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx3x2SI, true, VNx2SI, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx4x2SI, true, VNx2SI, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx5x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx6x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx7x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx8x2SI, TARGET_MIN_VLEN >= 64, VNx2SI, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx2x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1SI, TARGET_MIN_VLEN < 128, VNx1SI, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx2x16SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx16SF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 8)
+TUPLE_ENTRY (VNx2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx8SF, 2, LMUL_RESERVED, 0, LMUL_4, 8, LMUL_2, 16)
+TUPLE_ENTRY (VNx3x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx4x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx8SF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 16)
+TUPLE_ENTRY (VNx2x4SF, TARGET_VECTOR_ELEN_FP_32, VNx4SF, 2, LMUL_4, 8, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx3x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 3, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx4x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx4SF, 4, LMUL_RESERVED, 0, LMUL_2, 16, LMUL_1, 32)
+TUPLE_ENTRY (VNx5x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx6x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx7x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx8x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128, VNx4SF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 32)
+TUPLE_ENTRY (VNx2x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 2, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx3x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 3, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx4x2SF, TARGET_VECTOR_ELEN_FP_32, VNx2SF, 4, LMUL_2, 16, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx5x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 5, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx6x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 6, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx7x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 7, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx8x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64, VNx2SF, 8, LMUL_RESERVED, 0, LMUL_1, 32, LMUL_F2, 64)
+TUPLE_ENTRY (VNx2x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 2, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 3, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 4, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 5, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 6, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 7, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128, VNx1SF, 8, LMUL_1, 32, LMUL_F2, 64, LMUL_RESERVED, 0)
+
+/* Tuple modes for EEW = 64.  */
+TUPLE_ENTRY (VNx2x8DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx8DI, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
+TUPLE_ENTRY (VNx2x4DI, TARGET_VECTOR_ELEN_64, VNx4DI, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+TUPLE_ENTRY (VNx3x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx4x4DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx4DI, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx2x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx3x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx4x2DI, TARGET_VECTOR_ELEN_64, VNx2DI, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx5x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx6x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx7x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx8x2DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, VNx2DI, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx2x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, VNx1DI, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx2x8DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx8DF, 2, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_4, 16)
+TUPLE_ENTRY (VNx2x4DF, TARGET_VECTOR_ELEN_FP_64, VNx4DF, 2, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+TUPLE_ENTRY (VNx3x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 3, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx4x4DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx4DF, 4, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_2, 32)
+TUPLE_ENTRY (VNx2x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 2, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx3x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 3, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx4x2DF, TARGET_VECTOR_ELEN_FP_64, VNx2DF, 4, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+TUPLE_ENTRY (VNx5x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 5, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx6x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 6, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx7x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 7, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx8x2DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128, VNx2DF, 8, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_1, 64)
+TUPLE_ENTRY (VNx2x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 2, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx3x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 3, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx4x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 4, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx5x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 5, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx6x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 6, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx7x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 7, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+TUPLE_ENTRY (VNx8x1DF, TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128, VNx1DF, 8, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+
 #undef ENTRY
+#undef TUPLE_ENTRY
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index a0b32a247b6..032383167a0 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -992,13 +992,39 @@ riscv_v_ext_vector_mode_p (machine_mode mode)
   return false;
 }
 
+/* Return true if MODE is an RVV-enabled tuple mode.  */
+
+bool
+riscv_v_ext_tuple_mode_p (machine_mode mode)
+{
+#define TUPLE_ENTRY(MODE, REQUIREMENT, ...)                                    \
+  case MODE##mode:                                                             \
+    return REQUIREMENT;
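+  /* For example, the riscv-vector-switch.def entry
+       TUPLE_ENTRY (VNx2x4SI, true, VNx4SI, 2, ...)
+     expands to "case VNx2x4SImode: return true;" here.  */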
+  switch (mode)
+    {
+#include "riscv-vector-switch.def"
+    default:
+      return false;
+    }
+
+  return false;
+}
+
+/* Return true if MODE is either an RVV vector mode or an RVV tuple mode.  */
+
+static bool
+riscv_v_ext_mode_p (machine_mode mode)
+{
+  return riscv_v_ext_vector_mode_p (mode) || riscv_v_ext_tuple_mode_p (mode);
+}
+
 /* Call from ADJUST_NUNITS in riscv-modes.def. Return the correct
    NUNITS size for corresponding machine_mode.  */
 
 poly_int64
 riscv_v_adjust_nunits (machine_mode mode, int scale)
 {
-  if (riscv_v_ext_vector_mode_p (mode))
+  if (riscv_v_ext_mode_p (mode))
     return riscv_vector_chunks * scale;
   return scale;
 }
@@ -1056,7 +1082,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case PLUS:
       /* RVV load/store disallow any offset.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       info->type = ADDRESS_REG;
@@ -1067,7 +1093,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case LO_SUM:
       /* RVV load/store disallow LO_SUM.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       info->type = ADDRESS_LO_SUM;
@@ -1089,7 +1115,7 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 
     case CONST_INT:
       /* RVV load/store disallow CONST_INT.  */
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       /* Small-integer addresses don't occur very often, but they
@@ -2221,7 +2247,7 @@ riscv_immediate_operand_p (int code, HOST_WIDE_INT x)
 static int
 riscv_binary_cost (rtx x, int single_insns, int double_insns)
 {
-  if (!riscv_v_ext_vector_mode_p (GET_MODE (x))
+  if (!riscv_v_ext_mode_p (GET_MODE (x))
       && GET_MODE_SIZE (GET_MODE (x)).to_constant () == UNITS_PER_WORD * 2)
     return COSTS_N_INSNS (double_insns);
   return COSTS_N_INSNS (single_insns);
@@ -2271,7 +2297,7 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN
 {
   /* TODO: We set the RVV instruction cost to 1 by default.
      The cost model needs to be thoroughly analyzed and supported in the future. */
-  if (riscv_v_ext_vector_mode_p (mode))
+  if (riscv_v_ext_mode_p (mode))
     {
       *total = COSTS_N_INSNS (1);
       return true;
@@ -5885,7 +5911,7 @@ static bool
 riscv_secondary_memory_needed (machine_mode mode, reg_class_t class1,
 			       reg_class_t class2)
 {
-  return (!riscv_v_ext_vector_mode_p (mode)
+  return (!riscv_v_ext_mode_p (mode)
 	  && GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD
 	  && (class1 == FP_REGS) != (class2 == FP_REGS)
 	  && !TARGET_XTHEADFMV);
@@ -5919,6 +5945,22 @@ riscv_hard_regno_nregs (unsigned int regno, machine_mode mode)
       return exact_div (GET_MODE_SIZE (mode), UNITS_PER_V_REG).to_constant ();
     }
 
+  /* For tuple modes, the number of registers = NF * LMUL.  */
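+  /* For example, with MIN_VLEN = 64 the subpart mode of VNx2x4SI is VNx4SI,
+     which occupies two vector registers (LMUL = 2), so the whole tuple
+     needs 2 * 2 = 4 registers.  */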
+  if (riscv_v_ext_tuple_mode_p (mode))
+    {
+      unsigned int nf = riscv_vector::get_nf (mode);
+      machine_mode subpart_mode = riscv_vector::get_subpart_mode (mode);
+      poly_int64 size = GET_MODE_SIZE (subpart_mode);
+      gcc_assert (known_eq (size * nf, GET_MODE_SIZE (mode)));
+      if (maybe_lt (size, UNITS_PER_V_REG))
+	return nf;
+      else
+	{
+	  unsigned int lmul = exact_div (size, UNITS_PER_V_REG).to_constant ();
+	  return nf * lmul;
+	}
+    }
+
   /* Modes for VL or VTYPE are just markers and do not hold values,
      so each always consumes one register.  */
   if (regno == VTYPE_REGNUM || regno == VL_REGNUM)
@@ -5944,7 +5986,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 
   if (GP_REG_P (regno))
     {
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!GP_REG_P (regno + nregs - 1))
@@ -5952,7 +5994,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
     }
   else if (FP_REG_P (regno))
     {
-      if (riscv_v_ext_vector_mode_p (mode))
+      if (riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!FP_REG_P (regno + nregs - 1))
@@ -5971,7 +6013,7 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
     }
   else if (V_REG_P (regno))
     {
-      if (!riscv_v_ext_vector_mode_p (mode))
+      if (!riscv_v_ext_mode_p (mode))
 	return false;
 
       if (!V_REG_P (regno + nregs - 1))
@@ -5980,8 +6022,12 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
       /* 3.3.2. LMUL = 2,4,8, register numbers should be multiple of 2,4,8.
 	 but for mask vector register, register numbers can be any number. */
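+      /* For a tuple, the alignment requirement follows its subpart mode:
+	 e.g., VNx2x4DI with MIN_VLEN = 64 has subpart mode VNx4DI (LMUL = 4),
+	 so each subpart must start at a register number that is a multiple
+	 of 4.  */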
       int lmul = 1;
-      if (known_gt (GET_MODE_SIZE (mode), UNITS_PER_V_REG))
-	lmul = exact_div (GET_MODE_SIZE (mode), UNITS_PER_V_REG).to_constant ();
+      machine_mode rvv_mode = mode;
+      if (riscv_v_ext_tuple_mode_p (rvv_mode))
+	rvv_mode = riscv_vector::get_subpart_mode (rvv_mode);
+      poly_int64 size = GET_MODE_SIZE (rvv_mode);
+      if (known_gt (size, UNITS_PER_V_REG))
+	lmul = exact_div (size, UNITS_PER_V_REG).to_constant ();
       if (lmul != 1)
 	return ((regno % lmul) == 0);
     }
@@ -7004,7 +7050,7 @@ static bool
 riscv_vector_mode_supported_p (machine_mode mode)
 {
   if (TARGET_VECTOR)
-    return riscv_v_ext_vector_mode_p (mode);
+    return riscv_v_ext_mode_p (mode);
 
   return false;
 }
@@ -7046,8 +7092,17 @@ riscv_regmode_natural_size (machine_mode mode)
      anything smaller than that.  */
   /* ??? For now, only do this for variable-width RVV registers.
      Doing it for constant-sized registers breaks lower-subreg.c.  */
-  if (!riscv_vector_chunks.is_constant () && riscv_v_ext_vector_mode_p (mode))
-    return BYTES_PER_RISCV_VECTOR;
+  if (!riscv_vector_chunks.is_constant () && riscv_v_ext_mode_p (mode))
+    {
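+      /* A tuple mode can be subreg'd at subpart boundaries, so its natural
+	 register size is that of a single subpart rather than the whole
+	 tuple.  */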
+      if (riscv_v_ext_tuple_mode_p (mode))
+	{
+	  poly_uint64 size
+	    = GET_MODE_SIZE (riscv_vector::get_subpart_mode (mode));
+	  if (known_lt (size, BYTES_PER_RISCV_VECTOR))
+	    return size;
+	}
+      return BYTES_PER_RISCV_VECTOR;
+    }
   return UNITS_PER_WORD;
 }
 
@@ -7147,6 +7202,19 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
 							& ~zeroed_hardregs);
 }
 
+/* Implement target hook TARGET_ARRAY_MODE.  */
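+/* For example, an array of two VNx4SI vectors can be given the tuple mode
+   VNx2x4SI, provided that mode is available on the current target.  */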
+
+static opt_machine_mode
+riscv_array_mode (machine_mode mode, unsigned HOST_WIDE_INT nelems)
+{
+  machine_mode vmode;
+  if (TARGET_VECTOR
+      && riscv_vector::get_tuple_mode (mode, nelems).exists (&vmode))
+    return vmode;
+
+  return opt_machine_mode ();
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -7401,6 +7469,9 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
 #undef TARGET_ZERO_CALL_USED_REGS
 #define TARGET_ZERO_CALL_USED_REGS riscv_zero_call_used_regs
 
+#undef TARGET_ARRAY_MODE
+#define TARGET_ARRAY_MODE riscv_array_mode
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 1fb29da8a0b..e0d1a3315e0 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -169,7 +169,32 @@
   VNx1SI,VNx2SI,VNx4SI,VNx8SI,VNx16SI,VNx32SI,
   VNx1DI,VNx2DI,VNx4DI,VNx8DI,VNx16DI,
   VNx1SF,VNx2SF,VNx4SF,VNx8SF,VNx16SF,VNx32SF,
-  VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF"
+  VNx1DF,VNx2DF,VNx4DF,VNx8DF,VNx16DF,
+  VNx2x64QI,VNx2x32QI,VNx3x32QI,VNx4x32QI,
+  VNx2x16QI,VNx3x16QI,VNx4x16QI,VNx5x16QI,VNx6x16QI,VNx7x16QI,VNx8x16QI,
+  VNx2x8QI,VNx3x8QI,VNx4x8QI,VNx5x8QI,VNx6x8QI,VNx7x8QI,VNx8x8QI,
+  VNx2x4QI,VNx3x4QI,VNx4x4QI,VNx5x4QI,VNx6x4QI,VNx7x4QI,VNx8x4QI,
+  VNx2x2QI,VNx3x2QI,VNx4x2QI,VNx5x2QI,VNx6x2QI,VNx7x2QI,VNx8x2QI,
+  VNx2x1QI,VNx3x1QI,VNx4x1QI,VNx5x1QI,VNx6x1QI,VNx7x1QI,VNx8x1QI,
+  VNx2x32HI,VNx2x16HI,VNx3x16HI,VNx4x16HI,
+  VNx2x8HI,VNx3x8HI,VNx4x8HI,VNx5x8HI,VNx6x8HI,VNx7x8HI,VNx8x8HI,
+  VNx2x4HI,VNx3x4HI,VNx4x4HI,VNx5x4HI,VNx6x4HI,VNx7x4HI,VNx8x4HI,
+  VNx2x2HI,VNx3x2HI,VNx4x2HI,VNx5x2HI,VNx6x2HI,VNx7x2HI,VNx8x2HI,
+  VNx2x1HI,VNx3x1HI,VNx4x1HI,VNx5x1HI,VNx6x1HI,VNx7x1HI,VNx8x1HI,
+  VNx2x16SI,VNx2x8SI,VNx3x8SI,VNx4x8SI,
+  VNx2x4SI,VNx3x4SI,VNx4x4SI,VNx5x4SI,VNx6x4SI,VNx7x4SI,VNx8x4SI,
+  VNx2x2SI,VNx3x2SI,VNx4x2SI,VNx5x2SI,VNx6x2SI,VNx7x2SI,VNx8x2SI,
+  VNx2x1SI,VNx3x1SI,VNx4x1SI,VNx5x1SI,VNx6x1SI,VNx7x1SI,VNx8x1SI,
+  VNx2x16SF,VNx2x8SF,VNx3x8SF,VNx4x8SF,
+  VNx2x4SF,VNx3x4SF,VNx4x4SF,VNx5x4SF,VNx6x4SF,VNx7x4SF,VNx8x4SF,
+  VNx2x2SF,VNx3x2SF,VNx4x2SF,VNx5x2SF,VNx6x2SF,VNx7x2SF,VNx8x2SF,
+  VNx2x1SF,VNx3x1SF,VNx4x1SF,VNx5x1SF,VNx6x1SF,VNx7x1SF,VNx8x1SF,
+  VNx2x8DI,VNx2x4DI,VNx3x4DI,VNx4x4DI,
+  VNx2x2DI,VNx3x2DI,VNx4x2DI,VNx5x2DI,VNx6x2DI,VNx7x2DI,VNx8x2DI,
+  VNx2x1DI,VNx3x1DI,VNx4x1DI,VNx5x1DI,VNx6x1DI,VNx7x1DI,VNx8x1DI,
+  VNx2x8DF,VNx2x4DF,VNx3x4DF,VNx4x4DF,
+  VNx2x2DF,VNx3x2DF,VNx4x2DF,VNx5x2DF,VNx6x2DF,VNx7x2DF,VNx8x2DF,
+  VNx2x1DF,VNx3x1DF,VNx4x1DF,VNx5x1DF,VNx6x1DF,VNx7x1DF,VNx8x1DF"
   (const_string "unknown"))
 
 ;; True if the main data type is twice the size of a word.
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 3c6575208be..b42afb0ff1a 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -487,6 +487,166 @@
   (VNx16DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
 ])
 
+(define_mode_iterator VT [
+  (VNx2x64QI "TARGET_MIN_VLEN >= 128")
+  (VNx2x32QI "TARGET_MIN_VLEN >= 64")
+  (VNx3x32QI "TARGET_MIN_VLEN >= 128")
+  (VNx4x32QI "TARGET_MIN_VLEN >= 128")
+  VNx2x16QI
+  (VNx3x16QI "TARGET_MIN_VLEN >= 64")
+  (VNx4x16QI "TARGET_MIN_VLEN >= 64")
+  (VNx5x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx6x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx7x16QI "TARGET_MIN_VLEN >= 128")
+  (VNx8x16QI "TARGET_MIN_VLEN >= 128")
+  VNx2x8QI
+  VNx3x8QI
+  VNx4x8QI
+  (VNx5x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx6x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx7x8QI "TARGET_MIN_VLEN >= 64")
+  (VNx8x8QI "TARGET_MIN_VLEN >= 64")
+  VNx2x4QI
+  VNx3x4QI
+  VNx4x4QI
+  VNx5x4QI
+  VNx6x4QI
+  VNx7x4QI
+  VNx8x4QI
+  VNx2x2QI
+  VNx3x2QI
+  VNx4x2QI
+  VNx5x2QI
+  VNx6x2QI
+  VNx7x2QI
+  VNx8x2QI
+  (VNx2x1QI "TARGET_MIN_VLEN < 128")
+  (VNx3x1QI "TARGET_MIN_VLEN < 128")
+  (VNx4x1QI "TARGET_MIN_VLEN < 128")
+  (VNx5x1QI "TARGET_MIN_VLEN < 128")
+  (VNx6x1QI "TARGET_MIN_VLEN < 128")
+  (VNx7x1QI "TARGET_MIN_VLEN < 128")
+  (VNx8x1QI "TARGET_MIN_VLEN < 128")
+  (VNx2x32HI "TARGET_MIN_VLEN >= 128")
+  (VNx2x16HI "TARGET_MIN_VLEN >= 64")
+  (VNx3x16HI "TARGET_MIN_VLEN >= 128")
+  (VNx4x16HI "TARGET_MIN_VLEN >= 128")
+  VNx2x8HI
+  (VNx3x8HI "TARGET_MIN_VLEN >= 64")
+  (VNx4x8HI "TARGET_MIN_VLEN >= 64")
+  (VNx5x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx6x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx7x8HI "TARGET_MIN_VLEN >= 128")
+  (VNx8x8HI "TARGET_MIN_VLEN >= 128")
+  VNx2x4HI
+  VNx3x4HI
+  VNx4x4HI
+  (VNx5x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx6x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx7x4HI "TARGET_MIN_VLEN >= 64")
+  (VNx8x4HI "TARGET_MIN_VLEN >= 64")
+  VNx2x2HI
+  VNx3x2HI
+  VNx4x2HI
+  VNx5x2HI
+  VNx6x2HI
+  VNx7x2HI
+  VNx8x2HI
+  (VNx2x1HI "TARGET_MIN_VLEN < 128")
+  (VNx3x1HI "TARGET_MIN_VLEN < 128")
+  (VNx4x1HI "TARGET_MIN_VLEN < 128")
+  (VNx5x1HI "TARGET_MIN_VLEN < 128")
+  (VNx6x1HI "TARGET_MIN_VLEN < 128")
+  (VNx7x1HI "TARGET_MIN_VLEN < 128")
+  (VNx8x1HI "TARGET_MIN_VLEN < 128")
+  (VNx2x16SI "TARGET_MIN_VLEN >= 128")
+  (VNx2x8SI "TARGET_MIN_VLEN >= 64")
+  (VNx3x8SI "TARGET_MIN_VLEN >= 128")
+  (VNx4x8SI "TARGET_MIN_VLEN >= 128")
+  VNx2x4SI
+  (VNx3x4SI "TARGET_MIN_VLEN >= 64")
+  (VNx4x4SI "TARGET_MIN_VLEN >= 64")
+  (VNx5x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx6x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx7x4SI "TARGET_MIN_VLEN >= 128")
+  (VNx8x4SI "TARGET_MIN_VLEN >= 128")
+  VNx2x2SI
+  VNx3x2SI
+  VNx4x2SI
+  (VNx5x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx6x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx7x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx8x2SI "TARGET_MIN_VLEN >= 64")
+  (VNx2x1SI "TARGET_MIN_VLEN < 128")
+  (VNx3x1SI "TARGET_MIN_VLEN < 128")
+  (VNx4x1SI "TARGET_MIN_VLEN < 128")
+  (VNx5x1SI "TARGET_MIN_VLEN < 128")
+  (VNx6x1SI "TARGET_MIN_VLEN < 128")
+  (VNx7x1SI "TARGET_MIN_VLEN < 128")
+  (VNx8x1SI "TARGET_MIN_VLEN < 128")
+  (VNx2x8DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4DI "TARGET_VECTOR_ELEN_64")
+  (VNx3x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx4x4DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx3x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx4x2DI "TARGET_VECTOR_ELEN_64")
+  (VNx5x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx6x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx7x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx8x2DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx3x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx4x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx5x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx6x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx7x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx8x1DI "TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128")
+  (VNx2x16SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (VNx2x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx3x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx4x8SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx3x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx4x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx5x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx6x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx7x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx8x4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx3x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx4x2SF "TARGET_VECTOR_ELEN_FP_32")
+  (VNx5x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx6x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx7x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx8x2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN >= 64")
+  (VNx2x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx3x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx4x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx5x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx6x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx7x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx8x1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN < 128")
+  (VNx2x8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x4DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx3x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx4x4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx3x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx4x2DF "TARGET_VECTOR_ELEN_FP_64")
+  (VNx5x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx6x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx7x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx8x2DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN >= 128")
+  (VNx2x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx3x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx4x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx5x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx6x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx7x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+  (VNx8x1DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_MIN_VLEN < 128")
+])
+
 (define_mode_attr VLMULX2 [
   (VNx1QI "VNx2QI") (VNx2QI "VNx4QI") (VNx4QI "VNx8QI") (VNx8QI "VNx16QI") (VNx16QI "VNx32QI") (VNx32QI "VNx64QI") (VNx64QI "VNx128QI")
   (VNx1HI "VNx2HI") (VNx2HI "VNx4HI") (VNx4HI "VNx8HI") (VNx8HI "VNx16HI") (VNx16HI "VNx32HI") (VNx32HI "VNx64HI")
@@ -563,6 +723,32 @@
   (VNx1DI "VNx1BI") (VNx2DI "VNx2BI") (VNx4DI "VNx4BI") (VNx8DI "VNx8BI") (VNx16DI "VNx16BI")
   (VNx1SF "VNx1BI") (VNx2SF "VNx2BI") (VNx4SF "VNx4BI") (VNx8SF "VNx8BI") (VNx16SF "VNx16BI") (VNx32SF "VNx32BI")
   (VNx1DF "VNx1BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI") (VNx16DF "VNx16BI")
+  (VNx2x64QI "VNx64BI") (VNx2x32QI "VNx32BI") (VNx3x32QI "VNx32BI") (VNx4x32QI "VNx32BI")
+  (VNx2x16QI "VNx16BI") (VNx3x16QI "VNx16BI") (VNx4x16QI "VNx16BI") (VNx5x16QI "VNx16BI") (VNx6x16QI "VNx16BI") (VNx7x16QI "VNx16BI") (VNx8x16QI "VNx16BI")
+  (VNx2x8QI "VNx8BI") (VNx3x8QI "VNx8BI") (VNx4x8QI "VNx8BI") (VNx5x8QI "VNx8BI") (VNx6x8QI "VNx8BI") (VNx7x8QI "VNx8BI") (VNx8x8QI "VNx8BI")
+  (VNx2x4QI "VNx4BI") (VNx3x4QI "VNx4BI") (VNx4x4QI "VNx4BI") (VNx5x4QI "VNx4BI") (VNx6x4QI "VNx4BI") (VNx7x4QI "VNx4BI") (VNx8x4QI "VNx4BI")
+  (VNx2x2QI "VNx2BI") (VNx3x2QI "VNx2BI") (VNx4x2QI "VNx2BI") (VNx5x2QI "VNx2BI") (VNx6x2QI "VNx2BI") (VNx7x2QI "VNx2BI") (VNx8x2QI "VNx2BI")
+  (VNx2x1QI "VNx1BI") (VNx3x1QI "VNx1BI") (VNx4x1QI "VNx1BI") (VNx5x1QI "VNx1BI") (VNx6x1QI "VNx1BI") (VNx7x1QI "VNx1BI") (VNx8x1QI "VNx1BI")
+  (VNx2x32HI "VNx32BI") (VNx2x16HI "VNx16BI") (VNx3x16HI "VNx16BI") (VNx4x16HI "VNx16BI")
+  (VNx2x8HI "VNx8BI") (VNx3x8HI "VNx8BI") (VNx4x8HI "VNx8BI") (VNx5x8HI "VNx8BI") (VNx6x8HI "VNx8BI") (VNx7x8HI "VNx8BI") (VNx8x8HI "VNx8BI")
+  (VNx2x4HI "VNx4BI") (VNx3x4HI "VNx4BI") (VNx4x4HI "VNx4BI") (VNx5x4HI "VNx4BI") (VNx6x4HI "VNx4BI") (VNx7x4HI "VNx4BI") (VNx8x4HI "VNx4BI")
+  (VNx2x2HI "VNx2BI") (VNx3x2HI "VNx2BI") (VNx4x2HI "VNx2BI") (VNx5x2HI "VNx2BI") (VNx6x2HI "VNx2BI") (VNx7x2HI "VNx2BI") (VNx8x2HI "VNx2BI")
+  (VNx2x1HI "VNx1BI") (VNx3x1HI "VNx1BI") (VNx4x1HI "VNx1BI") (VNx5x1HI "VNx1BI") (VNx6x1HI "VNx1BI") (VNx7x1HI "VNx1BI") (VNx8x1HI "VNx1BI")
+  (VNx2x16SI "VNx16BI") (VNx2x8SI "VNx8BI") (VNx3x8SI "VNx8BI") (VNx4x8SI "VNx8BI")
+  (VNx2x4SI "VNx4BI") (VNx3x4SI "VNx4BI") (VNx4x4SI "VNx4BI") (VNx5x4SI "VNx4BI") (VNx6x4SI "VNx4BI") (VNx7x4SI "VNx4BI") (VNx8x4SI "VNx4BI")
+  (VNx2x2SI "VNx2BI") (VNx3x2SI "VNx2BI") (VNx4x2SI "VNx2BI") (VNx5x2SI "VNx2BI") (VNx6x2SI "VNx2BI") (VNx7x2SI "VNx2BI") (VNx8x2SI "VNx2BI")
+  (VNx2x1SI "VNx1BI") (VNx3x1SI "VNx1BI") (VNx4x1SI "VNx1BI") (VNx5x1SI "VNx1BI") (VNx6x1SI "VNx1BI") (VNx7x1SI "VNx1BI") (VNx8x1SI "VNx1BI")
+  (VNx2x8DI "VNx8BI") (VNx2x4DI "VNx4BI") (VNx3x4DI "VNx4BI") (VNx4x4DI "VNx4BI")
+  (VNx2x2DI "VNx2BI") (VNx3x2DI "VNx2BI") (VNx4x2DI "VNx2BI") (VNx5x2DI "VNx2BI") (VNx6x2DI "VNx2BI") (VNx7x2DI "VNx2BI") (VNx8x2DI "VNx2BI")
+  (VNx2x1DI "VNx1BI") (VNx3x1DI "VNx1BI") (VNx4x1DI "VNx1BI") (VNx5x1DI "VNx1BI") (VNx6x1DI "VNx1BI") (VNx7x1DI "VNx1BI") (VNx8x1DI "VNx1BI")
+  (VNx2x16SF "VNx16BI") (VNx2x8SF "VNx8BI") (VNx3x8SF "VNx8BI") (VNx4x8SF "VNx8BI")
+  (VNx2x4SF "VNx4BI") (VNx3x4SF "VNx4BI") (VNx4x4SF "VNx4BI") (VNx5x4SF "VNx4BI") (VNx6x4SF "VNx4BI") (VNx7x4SF "VNx4BI") (VNx8x4SF "VNx4BI")
+  (VNx2x2SF "VNx2BI") (VNx3x2SF "VNx2BI") (VNx4x2SF "VNx2BI") (VNx5x2SF "VNx2BI") (VNx6x2SF "VNx2BI") (VNx7x2SF "VNx2BI") (VNx8x2SF "VNx2BI")
+  (VNx2x1SF "VNx1BI") (VNx3x1SF "VNx1BI") (VNx4x1SF "VNx1BI") (VNx5x1SF "VNx1BI") (VNx6x1SF "VNx1BI") (VNx7x1SF "VNx1BI") (VNx8x1SF "VNx1BI")
+  (VNx2x8DF "VNx8BI")
+  (VNx2x4DF "VNx4BI") (VNx3x4DF "VNx4BI") (VNx4x4DF "VNx4BI")
+  (VNx2x2DF "VNx2BI") (VNx3x2DF "VNx2BI") (VNx4x2DF "VNx2BI") (VNx5x2DF "VNx2BI") (VNx6x2DF "VNx2BI") (VNx7x2DF "VNx2BI") (VNx8x2DF "VNx2BI")
+  (VNx2x1DF "VNx1BI") (VNx3x1DF "VNx1BI") (VNx4x1DF "VNx1BI") (VNx5x1DF "VNx1BI") (VNx6x1DF "VNx1BI") (VNx7x1DF "VNx1BI") (VNx8x1DF "VNx1BI")
 ])
 
 (define_mode_attr vm [
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 0fda11ed67d..955c2971b60 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -708,6 +708,50 @@
   DONE;
 })
 
+;; Define data movement for tuple modes.
+;; operands[2] is used to save the offset of each subpart.
+;; operands[3] is used to calculate the address of each subpart.
+;; operands[4] is the VL of the vsetvli instruction.
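+;; Conceptually, expand_tuple_move lowers a tuple load/store into one vector
+;; load/store per subpart, using operands[2] and operands[3] to step through
+;; the per-subpart addresses.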
+(define_expand "mov<mode>"
+  [(parallel [(set (match_operand:VT 0 "reg_or_mem_operand")
+                   (match_operand:VT 1 "general_operand"))
+     (clobber (match_dup 2))
+     (clobber (match_dup 3))
+     (clobber (match_dup 4))])]
+  "TARGET_VECTOR"
+  {
+    /* Need to force register if mem <- !reg.  */
+    if (MEM_P (operands[0]) && !REG_P (operands[1]))
+      operands[1] = force_reg (<MODE>mode, operands[1]);
+
+    if (GET_CODE (operands[1]) == CONST_VECTOR)
+      {
+        riscv_vector::expand_tuple_move (<VM>mode, operands);
+        DONE;
+      }
+
+    operands[2] = gen_rtx_SCRATCH (Pmode);
+    operands[3] = gen_rtx_SCRATCH (Pmode);
+    operands[4] = gen_rtx_SCRATCH (Pmode);
+  })
+
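+;; The pattern below stays as a single insn until after reload so that the
+;; scratch registers can be allocated; it is then split and lowered by
+;; expand_tuple_move.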
+(define_insn_and_split "*mov<VT:mode>_<P:mode>"
+  [(set (match_operand:VT 0 "reg_or_mem_operand" "=vr,vr, m")
+        (match_operand:VT 1 "reg_or_mem_operand" " vr, m,vr"))
+   (clobber (match_scratch:P 2 "=X,&r,&r"))
+   (clobber (match_scratch:P 3 "=X,&r,&r"))
+   (clobber (match_scratch:P 4 "=X,&r,&r"))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+  {
+    riscv_vector::expand_tuple_move (<VM>mode, operands);
+    DONE;
+  }
+  [(set_attr "type" "vmov,vlde,vste")
+   (set_attr "mode" "<VT:MODE>")])
+
 ;; -----------------------------------------------------------------
 ;; ---- Duplicate Operations
 ;; -----------------------------------------------------------------
-- 
2.36.1

