Subject: [PATCH 00/21] Add RVV (RISC-V 'V' Extension) support
Date: 2022-05-31 8:49 UTC
From: juzhe.zhong
To: gcc-patches
Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

Hello, our RVV GCC has been open sourced in the RISC-V foundation repo:
https://github.com/riscv-collab/riscv-gcc/tree/riscv-gcc-rvv-next and has been
widely used by different users. During this time, I have polished the code and
fixed bugs reported by different users, so I think it is ready to be pushed to
GCC upstream.

This patch series supports the RVV intrinsics, including the C and C++ APIs,
which are documented here: https://github.com/riscv-non-isa/rvv-intrinsic-doc.
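
To give a feel for the API, here is a minimal usage sketch of my own (an
illustrative example, using the unprefixed names from that document such as
vsetvl_e32m8 and vle32_v_i32m8): a vl-driven strip-mined loop that adds two
int32 buffers:

#include <stddef.h>
#include <stdint.h>
#include <riscv_vector.h>

void
vec_add (int32_t *dst, const int32_t *a, const int32_t *b, size_t n)
{
  size_t vl;
  for (; n > 0; n -= vl, a += vl, b += vl, dst += vl)
    {
      vl = vsetvl_e32m8 (n);                 /* elements handled this pass */
      vint32m8_t va = vle32_v_i32m8 (a, vl); /* unit-stride loads */
      vint32m8_t vb = vle32_v_i32m8 (b, vl);
      vse32_v_i32m8 (dst, vadd_vv_i32m8 (va, vb, vl), vl); /* add + store */
    }
}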

FP16 and segment RVV intrinsics are not included in this patch series:

Because there is no FP16 (ZFH extension) support in GCC upstream yet, I did not
include the FP16 RVV intrinsics in this patch series. I will support them in a
follow-up patch.

Because segment load/store support needs modifications to code outside the
RISC-V port, I did not include those intrinsics in this patch series either. I
will support them in a follow-up patch.

Auto-vectorization is not supported in this patch series; it will be supported
in the future.

The vsetvl insertion for each RVV intrinsic is implemented in
gcc/config/riscv/riscv-insert-vsetvl.cc. The pass implementation references the
LLVM pass: https://github.com/llvm/llvm-project/commits/main/llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp.
The two implementations are almost identical.
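
For illustration, here is a sketch of my own showing the intended effect of
the pass (the exact register allocation is illustrative): two intrinsic calls
that share the same SEW/LMUL and vl should be covered by a single vsetvli,
because the pass tracks the current vl/vtype state across instructions, just
as the LLVM pass does.

Example:
vint32m1_t
foo (vint32m1_t a, vint32m1_t b, size_t vl)
{
  vint32m1_t x = vadd_vv_i32m1 (a, b, vl);
  return vsub_vv_i32m1 (x, b, vl);
}

ASM (illustrative):
foo:
  vsetvli zero,a0,e32,m1,ta,mu   # one vsetvli serves both operations
  vadd.vv v8,v8,v9
  vsub.vv v8,v8,v9
  ret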

SEW=64 handling on RV32 systems:

Example:
vuint64m8_t 
foo (vuint64m8_t op1, int64_t r, size_t vl)
{
  return vadd_vx_u64m8(op1, r, vl);
}

ASM:
foo:
  vsetvli a5,zero,e32,m1,ta,mu
  li      a5,1431654400          # a5 = 0x55555000
  addi    a5,a5,1365             # a5 = 0x55555555 (0b0101...01)
  vmv.v.x v0,a5                  # v0 = mask selecting even 32-bit lanes
  vsetvli a5,zero,e32,m8,ta,mu
  vmv.v.x v24,a1                 # broadcast high 32 bits of r
  vmerge.vxm      v24,v24,a0,v0  # merge low 32 bits of r under the mask
  vsetvli zero,a2,e64,m8,ta,mu
  vadd.vv v8,v8,v24
  ret

Here 'v0' is the mask register holding the repeating bit pattern
0b010101010101... (a duplicated '01' mask). We first broadcast the high 32
bits of the scalar into the vector, and then merge the low 32 bits into the
vector under that mask. The result is the duplicated 64-bit vector.
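
In scalar terms, the broadcast-plus-merge sequence computes the following (a
sketch of the element-wise semantics, viewing the u64 destination as
little-endian 32-bit lanes; 'lane32' and 'n' are illustrative names):

/* vmv.v.x first writes the high half of r to every 32-bit lane;
   vmerge.vxm then overwrites the even (mask-set) lanes with the
   low half, so each 64-bit element ends up equal to r.  */
for (size_t i = 0; i < 2 * n; i++)
  lane32[i] = (i & 1) ? (uint32_t) (r >> 32)  /* odd lane: high half  */
                      : (uint32_t) r;         /* even lane: low half  */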

LLVM instead uses two 32-bit scalar stores followed by a vlse (strided load)
instruction to get the duplicated vector. That approach needs to access
memory; I tried it in GCC and the codegen was bad, so I implemented the SEW=64
handling on RV32 systems as described above. This can be optimized in the
future.
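
For comparison, the memory-based sequence looks roughly like this (my own
illustrative asm, assuming a spare stack slot at sp; a stride of zero makes
vlse64.v splat the one 64-bit value it reads):

  sw       a0,0(sp)              # store low 32 bits of r
  sw       a1,4(sp)              # store high 32 bits of r
  vsetvli  zero,a2,e64,m8,ta,mu
  vlse64.v v24,(sp),zero         # stride-0 load duplicates the value

The two stores plus the load round-trip through memory, which is what the
mask-and-merge sequence above avoids.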

zhongjuzhe (21):
  Add RVV modes and support scalable vector
  Add RVV intrinsic framework
  Add RVV datatypes
  Add RVV intrinsic enable #pragma riscv intrinsic "vector" and
    introduce RVV header "riscv_vector.h"
  Add RVV configuration intrinsic
  Add insert-vsetvl pass
  Add register spilling support
  Add poly manipulation
  Add misc function intrinsic support
  Add unit-stride load store intrinsics
  Add calling function support
  Add set get intrinsic support
  Adjust scalable frame and full testcases
  Add load and store intrinsics
  Add integer intrinsics
  Add integer intrinsic C api testcases
  Add integer intrinsic C++ api testcases
  Add rest intrinsic support
  Add rest C api intrinsic testcases
  Add rest C++ intrinsic api testcases
  Add SEW=64 on RV32 system testcases

 gcc/config.gcc                                |     6 +-
 gcc/config/riscv/constraints.md               |    47 +
 gcc/config/riscv/md-parser                    |   205 +
 gcc/config/riscv/predicates.md                |    98 +-
 gcc/config/riscv/riscv-builtins.cc            |    88 +-
 gcc/config/riscv/riscv-c.cc                   |    65 +
 gcc/config/riscv/riscv-insert-vsetvl.cc       |  2312 +
 gcc/config/riscv/riscv-modes.def              |   177 +
 gcc/config/riscv/riscv-opts.h                 |    39 +
 gcc/config/riscv/riscv-passes.def             |     2 +
 gcc/config/riscv/riscv-protos.h               |    81 +-
 gcc/config/riscv/riscv-sr.cc                  |     2 +-
 .../riscv/riscv-vector-builtins-functions.cc  |  4788 ++
 .../riscv/riscv-vector-builtins-functions.def |   391 +
 .../riscv/riscv-vector-builtins-functions.h   |  2905 +
 .../riscv/riscv-vector-builtins-iterators.def |   819 +
 gcc/config/riscv/riscv-vector-builtins.cc     |   746 +
 gcc/config/riscv/riscv-vector-builtins.def    |    37 +
 gcc/config/riscv/riscv-vector-builtins.h      |    62 +
 gcc/config/riscv/riscv-vector.cc              |  1197 +
 gcc/config/riscv/riscv-vector.h               |    26 +
 gcc/config/riscv/riscv.cc                     |  1001 +-
 gcc/config/riscv/riscv.h                      |   106 +-
 gcc/config/riscv/riscv.md                     |   170 +-
 gcc/config/riscv/riscv.opt                    |    32 +
 gcc/config/riscv/riscv_vector.h               |    41 +
 gcc/config/riscv/t-riscv                      |    44 +
 gcc/config/riscv/vector-iterators.md          |  1067 +
 gcc/config/riscv/vector.md                    |  6032 ++
 .../g++.target/riscv/rvv/misc_func.C          |  2597 +
 .../g++.target/riscv/rvv/rvv-intrinsic.exp    |    39 +
 gcc/testsuite/g++.target/riscv/rvv/set-get.C  |   730 +
 gcc/testsuite/g++.target/riscv/rvv/vaadd.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vaaddu.C   |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vadc.C     |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vadd.C     | 10565 +++
 gcc/testsuite/g++.target/riscv/rvv/vand.C     | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vasub.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vasubu.C   |  5637 ++
 .../g++.target/riscv/rvv/vcompress.C          |  2550 +
 gcc/testsuite/g++.target/riscv/rvv/vcpop.C    |   229 +
 gcc/testsuite/g++.target/riscv/rvv/vdiv.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vdivu.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vfabs.C    |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfadd.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfclass.C  |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vfcvt.C    |  6918 ++
 gcc/testsuite/g++.target/riscv/rvv/vfdiv.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfirst.C   |   229 +
 gcc/testsuite/g++.target/riscv/rvv/vfmacc.C   |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmadd.C   |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmax.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmerge.C  |   438 +
 gcc/testsuite/g++.target/riscv/rvv/vfmin.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmsac.C   |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmsub.C   |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmul.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfmv.C     |   726 +
 gcc/testsuite/g++.target/riscv/rvv/vfncvt.C   |  6662 ++
 gcc/testsuite/g++.target/riscv/rvv/vfneg.C    |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfnmacc.C  |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfnmadd.C  |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfnmsac.C  |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfnmsub.C  |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfrdiv.C   |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfrec7.C   |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfredmax.C |   870 +
 gcc/testsuite/g++.target/riscv/rvv/vfredmin.C |   870 +
 .../g++.target/riscv/rvv/vfredosum.C          |   870 +
 .../g++.target/riscv/rvv/vfredusum.C          |   870 +
 gcc/testsuite/g++.target/riscv/rvv/vfrsqrt7.C |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfrsub.C   |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfsgnj.C   |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfsgnjn.C  |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfsgnjx.C  |  2310 +
 .../g++.target/riscv/rvv/vfslide1down.C       |  1158 +
 .../g++.target/riscv/rvv/vfslide1up.C         |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfsqrt.C   |  1158 +
 gcc/testsuite/g++.target/riscv/rvv/vfsub.C    |  2310 +
 gcc/testsuite/g++.target/riscv/rvv/vfwadd.C   |  2054 +
 gcc/testsuite/g++.target/riscv/rvv/vfwcvt.C   |  4870 ++
 gcc/testsuite/g++.target/riscv/rvv/vfwmacc.C  |  1030 +
 gcc/testsuite/g++.target/riscv/rvv/vfwmsac.C  |  1030 +
 gcc/testsuite/g++.target/riscv/rvv/vfwmul.C   |  1030 +
 gcc/testsuite/g++.target/riscv/rvv/vfwnmacc.C |  1030 +
 gcc/testsuite/g++.target/riscv/rvv/vfwnmsac.C |  1030 +
 .../g++.target/riscv/rvv/vfwredosum.C         |   486 +
 .../g++.target/riscv/rvv/vfwredusum.C         |   486 +
 gcc/testsuite/g++.target/riscv/rvv/vfwsub.C   |  2054 +
 gcc/testsuite/g++.target/riscv/rvv/vid.C      |  2821 +
 gcc/testsuite/g++.target/riscv/rvv/viota.C    |  2821 +
 gcc/testsuite/g++.target/riscv/rvv/vlex_1.C   |  6792 ++
 gcc/testsuite/g++.target/riscv/rvv/vlexff_1.C |  6792 ++
 .../g++.target/riscv/rvv/vloxeix_1.C          |  8663 +++
 .../g++.target/riscv/rvv/vloxeix_2.C          |  7191 ++
 .../g++.target/riscv/rvv/vloxeix_3.C          |  6120 ++
 .../g++.target/riscv/rvv/vloxeix_4.C          |  2503 +
 gcc/testsuite/g++.target/riscv/rvv/vlsex_1.C  |  6792 ++
 .../g++.target/riscv/rvv/vluxeix_1.C          |  8663 +++
 .../g++.target/riscv/rvv/vluxeix_2.C          |  7191 ++
 .../g++.target/riscv/rvv/vluxeix_3.C          |  6120 ++
 .../g++.target/riscv/rvv/vluxeix_4.C          |  2503 +
 gcc/testsuite/g++.target/riscv/rvv/vmacc.C    | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vmadc.C    |  2821 +
 gcc/testsuite/g++.target/riscv/rvv/vmadd.C    | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vmand.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmandn.C   |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmax.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmaxu.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmclr.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmerge.C   |  4662 ++
 gcc/testsuite/g++.target/riscv/rvv/vmfeq.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmfge.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmfgt.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmfle.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmflt.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmfne.C    |  1086 +
 gcc/testsuite/g++.target/riscv/rvv/vmin.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vminu.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmmv.C     |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmnand.C   |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmnor.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmnot.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmor.C     |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmorn.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmsbc.C    |  2821 +
 gcc/testsuite/g++.target/riscv/rvv/vmsbf.C    |   425 +
 gcc/testsuite/g++.target/riscv/rvv/vmseq.C    |  5285 ++
 gcc/testsuite/g++.target/riscv/rvv/vmset.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmsge.C    |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsgeu.C   |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsgt.C    |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsgtu.C   |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsif.C    |   425 +
 gcc/testsuite/g++.target/riscv/rvv/vmsle.C    |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsleu.C   |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmslt.C    |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsltu.C   |  2645 +
 gcc/testsuite/g++.target/riscv/rvv/vmsne.C    |  5285 ++
 gcc/testsuite/g++.target/riscv/rvv/vmsof.C    |   425 +
 gcc/testsuite/g++.target/riscv/rvv/vmul.C     | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vmulh.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmulhsu.C  |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmulhu.C   |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vmv.C      |  5222 ++
 gcc/testsuite/g++.target/riscv/rvv/vmv1r.C    |   125 +
 gcc/testsuite/g++.target/riscv/rvv/vmv2r.C    |   125 +
 gcc/testsuite/g++.target/riscv/rvv/vmv4r.C    |   125 +
 gcc/testsuite/g++.target/riscv/rvv/vmv8r.C    |   125 +
 gcc/testsuite/g++.target/riscv/rvv/vmxnor.C   |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vmxor.C    |   117 +
 gcc/testsuite/g++.target/riscv/rvv/vnclip.C   |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vnclipu.C  |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vncvt.C    |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vneg.C     |  2821 +
 gcc/testsuite/g++.target/riscv/rvv/vnmsac.C   | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vnmsub.C   | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vnot.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vnsra.C    |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vnsrl.C    |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vor.C      | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vredand.C  |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vredmax.C  |  2117 +
 gcc/testsuite/g++.target/riscv/rvv/vredmaxu.C |  2117 +
 gcc/testsuite/g++.target/riscv/rvv/vredmin.C  |  2117 +
 gcc/testsuite/g++.target/riscv/rvv/vredminu.C |  2117 +
 gcc/testsuite/g++.target/riscv/rvv/vredor.C   |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vredsum.C  |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vredxor.C  |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vrem.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vremu.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vrgather.C | 13574 ++++
 .../g++.target/riscv/rvv/vrgatherei16.C       |  6534 ++
 gcc/testsuite/g++.target/riscv/rvv/vrsub.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsadd.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsaddu.C   |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsbc.C     |  4229 +
 gcc/testsuite/g++.target/riscv/rvv/vsetvl.C   |     4 +
 gcc/testsuite/g++.target/riscv/rvv/vsex.C     |  1704 +
 gcc/testsuite/g++.target/riscv/rvv/vsext.C    |  3589 +
 .../g++.target/riscv/rvv/vslide1down.C        |  5637 ++
 .../g++.target/riscv/rvv/vslide1up.C          |  5637 ++
 .../g++.target/riscv/rvv/vslidedown.C         |  6367 ++
 gcc/testsuite/g++.target/riscv/rvv/vslideup.C |  6367 ++
 gcc/testsuite/g++.target/riscv/rvv/vsll.C     | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vsmul.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsoxeix.C  |  6120 ++
 gcc/testsuite/g++.target/riscv/rvv/vsra.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsrl.C     |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vssex.C    |  1704 +
 gcc/testsuite/g++.target/riscv/rvv/vssra.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vssrl.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vssub.C    |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vssubu.C   |  5637 ++
 gcc/testsuite/g++.target/riscv/rvv/vsub.C     | 10565 +++
 gcc/testsuite/g++.target/riscv/rvv/vsuxeix.C  |  6120 ++
 gcc/testsuite/g++.target/riscv/rvv/vwadd.C    |  7685 ++
 gcc/testsuite/g++.target/riscv/rvv/vwaddu.C   |  7685 ++
 gcc/testsuite/g++.target/riscv/rvv/vwcvt.C    |  1925 +
 gcc/testsuite/g++.target/riscv/rvv/vwcvtu.C   |  1925 +
 gcc/testsuite/g++.target/riscv/rvv/vwmacc.C   |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwmaccsu.C |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwmaccu.C  |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwmaccus.C |  1925 +
 gcc/testsuite/g++.target/riscv/rvv/vwmul.C    |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwmulsu.C  |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwmulu.C   |  3845 +
 gcc/testsuite/g++.target/riscv/rvv/vwredsum.C |  1733 +
 .../g++.target/riscv/rvv/vwredsumu.C          |  1733 +
 gcc/testsuite/g++.target/riscv/rvv/vwsub.C    |  7685 ++
 gcc/testsuite/g++.target/riscv/rvv/vwsubu.C   |  7685 ++
 gcc/testsuite/g++.target/riscv/rvv/vxor.C     | 11269 +++
 gcc/testsuite/g++.target/riscv/rvv/vzext.C    |  3589 +
 .../riscv/rvv/custom/calling_convention_1.c   |    46 +
 .../riscv/rvv/custom/rvv-custom.exp           |    47 +
 .../vx-sew64/rvv-e64-on-rv32.h                |    78 +
 .../vx-sew64/rvv-intrinsic-execute.exp        |    48 +
 .../vx-sew64/vadd.vx_i_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vadd.vx_i_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vadd.vx_i_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vadd.vx_i_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vadd.vx_u_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vadd.vx_u_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vadd.vx_u_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vadd.vx_u_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vand.vx_i_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vand.vx_i_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vand.vx_i_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vand.vx_i_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vand.vx_u_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vand.vx_u_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vand.vx_u_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vand.vx_u_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vmax.vx_i_sew64_on_rv32_m1.c     |    32 +
 .../vx-sew64/vmax.vx_i_sew64_on_rv32_m2.c     |    32 +
 .../vx-sew64/vmax.vx_i_sew64_on_rv32_m4.c     |    32 +
 .../vx-sew64/vmax.vx_i_sew64_on_rv32_m8.c     |    32 +
 .../vx-sew64/vmaxu.vx_u_sew64_on_rv32_m1.c    |    32 +
 .../vx-sew64/vmaxu.vx_u_sew64_on_rv32_m2.c    |    32 +
 .../vx-sew64/vmaxu.vx_u_sew64_on_rv32_m4.c    |    32 +
 .../vx-sew64/vmaxu.vx_u_sew64_on_rv32_m8.c    |    32 +
 .../vx-sew64/vmin.vx_i_sew64_on_rv32_m1.c     |    32 +
 .../vx-sew64/vmin.vx_i_sew64_on_rv32_m2.c     |    32 +
 .../vx-sew64/vmin.vx_i_sew64_on_rv32_m4.c     |    32 +
 .../vx-sew64/vmin.vx_i_sew64_on_rv32_m8.c     |    32 +
 .../vx-sew64/vminu.vx_u_sew64_on_rv32_m1.c    |    32 +
 .../vx-sew64/vminu.vx_u_sew64_on_rv32_m2.c    |    32 +
 .../vx-sew64/vminu.vx_u_sew64_on_rv32_m4.c    |    32 +
 .../vx-sew64/vminu.vx_u_sew64_on_rv32_m8.c    |    32 +
 .../vx-sew64/vmv.s.x_i_sew64_on_rv32_m1.c     |    32 +
 .../vx-sew64/vmv.s.x_i_sew64_on_rv32_m2.c     |    32 +
 .../vx-sew64/vmv.s.x_i_sew64_on_rv32_m4.c     |    32 +
 .../vx-sew64/vmv.s.x_i_sew64_on_rv32_m8.c     |    32 +
 .../vx-sew64/vmv.s.x_u_sew64_on_rv32_m1.c     |    32 +
 .../vx-sew64/vmv.s.x_u_sew64_on_rv32_m2.c     |    32 +
 .../vx-sew64/vmv.s.x_u_sew64_on_rv32_m4.c     |    32 +
 .../vx-sew64/vmv.s.x_u_sew64_on_rv32_m8.c     |    32 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m1.c     |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m1_e64.c |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m2.c     |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m2_e64.c |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m4.c     |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m4_e64.c |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m8.c     |    31 +
 .../vx-sew64/vmv.v.x_i_sew64_on_rv32_m8_e64.c |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m1.c     |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m1_e64.c |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m2.c     |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m2_e64.c |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m4.c     |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m4_e64.c |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m8.c     |    31 +
 .../vx-sew64/vmv.v.x_u_sew64_on_rv32_m8_e64.c |    31 +
 .../vx-sew64/vor.vx_i_sew64_on_rv32_m1.c      |    33 +
 .../vx-sew64/vor.vx_i_sew64_on_rv32_m2.c      |    33 +
 .../vx-sew64/vor.vx_i_sew64_on_rv32_m4.c      |    33 +
 .../vx-sew64/vor.vx_i_sew64_on_rv32_m8.c      |    33 +
 .../vx-sew64/vor.vx_u_sew64_on_rv32_m1.c      |    33 +
 .../vx-sew64/vor.vx_u_sew64_on_rv32_m2.c      |    33 +
 .../vx-sew64/vor.vx_u_sew64_on_rv32_m4.c      |    33 +
 .../vx-sew64/vor.vx_u_sew64_on_rv32_m8.c      |    33 +
 .../vx-sew64/vrsub.vx_i_sew64_on_rv32_m1.c    |    33 +
 .../vx-sew64/vrsub.vx_i_sew64_on_rv32_m2.c    |    33 +
 .../vx-sew64/vrsub.vx_i_sew64_on_rv32_m4.c    |    33 +
 .../vx-sew64/vrsub.vx_i_sew64_on_rv32_m8.c    |    33 +
 .../vx-sew64/vrsub.vx_u_sew64_on_rv32_m1.c    |    33 +
 .../vx-sew64/vrsub.vx_u_sew64_on_rv32_m2.c    |    33 +
 .../vx-sew64/vrsub.vx_u_sew64_on_rv32_m4.c    |    33 +
 .../vx-sew64/vrsub.vx_u_sew64_on_rv32_m8.c    |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m1.c       |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m1_e64.c   |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m2.c       |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m2_e64.c   |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m4.c       |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m4_e64.c   |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m8.c       |    33 +
 .../vslide1down.vx_i_sew64_on_rv32_m8_e64.c   |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m1.c       |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m1_e64.c   |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m2.c       |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m2_e64.c   |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m4.c       |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m4_e64.c   |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m8.c       |    33 +
 .../vslide1down.vx_u_sew64_on_rv32_m8_e64.c   |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m1.c         |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m1_e64.c     |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m2.c         |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m2_e64.c     |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m4.c         |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m4_e64.c     |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m8.c         |    33 +
 .../vslide1up.vx_i_sew64_on_rv32_m8_e64.c     |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m1.c         |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m1_e64.c     |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m2.c         |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m2_e64.c     |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m4.c         |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m4_e64.c     |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m8.c         |    33 +
 .../vslide1up.vx_u_sew64_on_rv32_m8_e64.c     |    33 +
 .../vx-sew64/vsub.vx_i_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vsub.vx_i_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vsub.vx_i_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vsub.vx_i_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vsub.vx_u_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vsub.vx_u_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vsub.vx_u_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vsub.vx_u_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vxor.vx_i_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vxor.vx_i_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vxor.vx_i_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vxor.vx_i_sew64_on_rv32_m8.c     |    33 +
 .../vx-sew64/vxor.vx_u_sew64_on_rv32_m1.c     |    33 +
 .../vx-sew64/vxor.vx_u_sew64_on_rv32_m2.c     |    33 +
 .../vx-sew64/vxor.vx_u_sew64_on_rv32_m4.c     |    33 +
 .../vx-sew64/vxor.vx_u_sew64_on_rv32_m8.c     |    33 +
 .../rvv-intrinsic-rv32-imm32bit.exp           |    44 +
 .../vaadd.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vaadd.vx_i_sew64_on_rv32.c                |   104 +
 .../vaadd.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vaadd.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vaaddu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vaaddu.vx_u_sew64_on_rv32.c               |   104 +
 .../vaaddu.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vaaddu.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vadc.vxm_i_sew64_on_rv32.c                |   104 +
 .../vadc.vxm_i_tx_sew64_on_rv32.c             |   200 +
 .../vadc.vxm_u_sew64_on_rv32.c                |   104 +
 .../vadc.vxm_u_tx_sew64_on_rv32.c             |   200 +
 .../vadd.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vadd.vx_i_sew64_on_rv32.c                 |   104 +
 .../vadd.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vadd.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vadd.vx_u_mask_sew64_on_rv32.c            |   104 +
 .../vadd.vx_u_sew64_on_rv32.c                 |   104 +
 .../vadd.vx_u_tx_sew64_on_rv32.c              |   200 +
 .../vadd.vx_u_txmx_sew64_on_rv32.c            |   392 +
 .../vand.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vand.vx_i_sew64_on_rv32.c                 |   104 +
 .../vand.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vand.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vand.vx_u_mask_sew64_on_rv32.c            |   104 +
 .../vand.vx_u_sew64_on_rv32.c                 |   104 +
 .../vand.vx_u_tx_sew64_on_rv32.c              |   200 +
 .../vand.vx_u_txmx_sew64_on_rv32.c            |   392 +
 .../vasub.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vasub.vx_i_sew64_on_rv32.c                |   104 +
 .../vasub.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vasub.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vasubu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vasubu.vx_u_sew64_on_rv32.c               |   104 +
 .../vasubu.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vasubu.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vdiv.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vdiv.vx_i_sew64_on_rv32.c                 |   104 +
 .../vdiv.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vdiv.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vdivu.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vdivu.vx_u_sew64_on_rv32.c                |   104 +
 .../vdivu.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vdivu.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vmacc.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmacc.vx_i_sew64_on_rv32.c                |   104 +
 .../vmacc.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vmacc.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vmacc.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vmacc.vx_u_sew64_on_rv32.c                |   104 +
 .../vmacc.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vmacc.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vmadc.vx_i_sew64_on_rv32.c                |   104 +
 .../vmadc.vx_u_sew64_on_rv32.c                |   104 +
 .../vmadc.vxm_i_sew64_on_rv32.c               |   104 +
 .../vmadc.vxm_u_sew64_on_rv32.c               |   104 +
 .../vmadd.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmadd.vx_i_sew64_on_rv32.c                |   104 +
 .../vmadd.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vmadd.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vmadd.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vmadd.vx_u_sew64_on_rv32.c                |   104 +
 .../vmadd.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vmadd.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vmax.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vmax.vx_i_sew64_on_rv32.c                 |   104 +
 .../vmax.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vmax.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vmaxu.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vmaxu.vx_u_sew64_on_rv32.c                |   104 +
 .../vmaxu.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vmaxu.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vmerge.vxm_i_sew64_on_rv32.c              |   104 +
 .../vmerge.vxm_i_tx_sew64_on_rv32.c           |   200 +
 .../vmerge.vxm_u_sew64_on_rv32.c              |   104 +
 .../vmerge.vxm_u_tx_sew64_on_rv32.c           |   200 +
 .../vmin.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vmin.vx_i_sew64_on_rv32.c                 |   104 +
 .../vmin.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vmin.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vminu.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vminu.vx_u_sew64_on_rv32.c                |   104 +
 .../vminu.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vminu.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vmsbc.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsbc.vx_u_sew64_on_rv32.c                |   104 +
 .../vmsbc.vxm_i_sew64_on_rv32.c               |   104 +
 .../vmsbc.vxm_u_sew64_on_rv32.c               |   104 +
 .../vmseq.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmseq.vx_i_sew64_on_rv32.c                |   104 +
 .../vmseq.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vmseq.vx_u_sew64_on_rv32.c                |   104 +
 .../vmsge.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmsge.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsgeu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vmsgeu.vx_u_sew64_on_rv32.c               |   104 +
 .../vmsgt.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmsgt.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsgtu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vmsgtu.vx_u_sew64_on_rv32.c               |   104 +
 .../vmsle.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmsle.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsleu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vmsleu.vx_u_sew64_on_rv32.c               |   104 +
 .../vmslt.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmslt.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsltu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vmsltu.vx_u_sew64_on_rv32.c               |   104 +
 .../vmsne.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmsne.vx_i_sew64_on_rv32.c                |   104 +
 .../vmsne.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vmsne.vx_u_sew64_on_rv32.c                |   104 +
 .../vmul.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vmul.vx_i_sew64_on_rv32.c                 |   104 +
 .../vmul.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vmul.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vmul.vx_u_mask_sew64_on_rv32.c            |   104 +
 .../vmul.vx_u_sew64_on_rv32.c                 |   104 +
 .../vmul.vx_u_tx_sew64_on_rv32.c              |   200 +
 .../vmul.vx_u_txmx_sew64_on_rv32.c            |   392 +
 .../vmulh.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vmulh.vx_i_sew64_on_rv32.c                |   104 +
 .../vmulh.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vmulh.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vmulhsu.vx_iu_mask_sew64_on_rv32.c        |   104 +
 .../vmulhsu.vx_iu_sew64_on_rv32.c             |   104 +
 .../vmulhsu.vx_iu_tx_sew64_on_rv32.c          |   200 +
 .../vmulhsu.vx_iu_txmx_sew64_on_rv32.c        |   392 +
 .../vmulhu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vmulhu.vx_u_sew64_on_rv32.c               |   104 +
 .../vmulhu.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vmulhu.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vmv.s.x_i_sew64_on_rv32.c                 |    92 +
 .../vmv.s.x_i_tx_sew64_on_rv32.c              |   176 +
 .../vmv.s.x_u_sew64_on_rv32.c                 |    92 +
 .../vmv.s.x_u_tx_sew64_on_rv32.c              |   176 +
 .../vmv.v.x_i_sew64_on_rv32_e64.c             |    92 +
 .../vmv.v.x_i_tx_sew64_on_rv32_e64.c          |   176 +
 .../vmv.v.x_u_sew64_on_rv32_e64.c             |    92 +
 .../vmv.v.x_u_tx_sew64_on_rv32_e64.c          |   176 +
 .../vnmsac.vx_i_mask_sew64_on_rv32.c          |   104 +
 .../vnmsac.vx_i_sew64_on_rv32.c               |   104 +
 .../vnmsac.vx_i_tx_sew64_on_rv32.c            |   200 +
 .../vnmsac.vx_i_txmx_sew64_on_rv32.c          |   392 +
 .../vnmsac.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vnmsac.vx_u_sew64_on_rv32.c               |   104 +
 .../vnmsac.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vnmsac.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vnmsub.vx_i_mask_sew64_on_rv32.c          |   104 +
 .../vnmsub.vx_i_sew64_on_rv32.c               |   104 +
 .../vnmsub.vx_i_tx_sew64_on_rv32.c            |   200 +
 .../vnmsub.vx_i_txmx_sew64_on_rv32.c          |   392 +
 .../vnmsub.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vnmsub.vx_u_sew64_on_rv32.c               |   104 +
 .../vnmsub.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vnmsub.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vor.vx_i_mask_sew64_on_rv32.c             |   104 +
 .../vor.vx_i_sew64_on_rv32.c                  |   104 +
 .../vor.vx_i_tx_sew64_on_rv32.c               |   200 +
 .../vor.vx_i_txmx_sew64_on_rv32.c             |   392 +
 .../vor.vx_u_mask_sew64_on_rv32.c             |   104 +
 .../vor.vx_u_sew64_on_rv32.c                  |   104 +
 .../vor.vx_u_tx_sew64_on_rv32.c               |   200 +
 .../vor.vx_u_txmx_sew64_on_rv32.c             |   392 +
 .../vrem.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vrem.vx_i_sew64_on_rv32.c                 |   104 +
 .../vrem.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vrem.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vremu.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vremu.vx_u_sew64_on_rv32.c                |   104 +
 .../vremu.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vremu.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vrsub.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vrsub.vx_i_sew64_on_rv32.c                |   104 +
 .../vrsub.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vrsub.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vrsub.vx_u_mask_sew64_on_rv32.c           |   104 +
 .../vrsub.vx_u_sew64_on_rv32.c                |   104 +
 .../vrsub.vx_u_tx_sew64_on_rv32.c             |   200 +
 .../vrsub.vx_u_txmx_sew64_on_rv32.c           |   392 +
 .../vsadd.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vsadd.vx_i_sew64_on_rv32.c                |   104 +
 .../vsadd.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vsadd.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vsaddu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vsaddu.vx_u_sew64_on_rv32.c               |   104 +
 .../vsaddu.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vsaddu.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vsbc.vxm_i_sew64_on_rv32.c                |   104 +
 .../vsbc.vxm_i_tx_sew64_on_rv32.c             |   200 +
 .../vsbc.vxm_u_sew64_on_rv32.c                |   104 +
 .../vsbc.vxm_u_tx_sew64_on_rv32.c             |   200 +
 .../vslide1down.vx_i_mask_sew64_on_rv32_e64.c |   104 +
 .../vslide1down.vx_i_sew64_on_rv32_e64.c      |   104 +
 .../vslide1down.vx_i_tx_sew64_on_rv32_e64.c   |   200 +
 .../vslide1down.vx_i_txmx_sew64_on_rv32_e64.c |   392 +
 .../vslide1down.vx_u_mask_sew64_on_rv32_e64.c |   104 +
 .../vslide1down.vx_u_sew64_on_rv32_e64.c      |   104 +
 .../vslide1down.vx_u_tx_sew64_on_rv32_e64.c   |   200 +
 .../vslide1down.vx_u_txmx_sew64_on_rv32_e64.c |   392 +
 .../vslide1up.vx_i_mask_sew64_on_rv32_e64.c   |   104 +
 .../vslide1up.vx_i_sew64_on_rv32_e64.c        |   104 +
 .../vslide1up.vx_i_tx_sew64_on_rv32_e64.c     |   200 +
 .../vslide1up.vx_i_txmx_sew64_on_rv32_e64.c   |   392 +
 .../vslide1up.vx_u_mask_sew64_on_rv32_e64.c   |   104 +
 .../vslide1up.vx_u_sew64_on_rv32_e64.c        |   104 +
 .../vslide1up.vx_u_tx_sew64_on_rv32_e64.c     |   200 +
 .../vslide1up.vx_u_txmx_sew64_on_rv32_e64.c   |   392 +
 .../vsmul.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vsmul.vx_i_sew64_on_rv32.c                |   104 +
 .../vsmul.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vsmul.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vssub.vx_i_mask_sew64_on_rv32.c           |   104 +
 .../vssub.vx_i_sew64_on_rv32.c                |   104 +
 .../vssub.vx_i_tx_sew64_on_rv32.c             |   200 +
 .../vssub.vx_i_txmx_sew64_on_rv32.c           |   392 +
 .../vssubu.vx_u_mask_sew64_on_rv32.c          |   104 +
 .../vssubu.vx_u_sew64_on_rv32.c               |   104 +
 .../vssubu.vx_u_tx_sew64_on_rv32.c            |   200 +
 .../vssubu.vx_u_txmx_sew64_on_rv32.c          |   392 +
 .../vsub.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vsub.vx_i_sew64_on_rv32.c                 |   104 +
 .../vsub.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vsub.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vsub.vx_u_mask_sew64_on_rv32.c            |   104 +
 .../vsub.vx_u_sew64_on_rv32.c                 |   104 +
 .../vsub.vx_u_tx_sew64_on_rv32.c              |   200 +
 .../vsub.vx_u_txmx_sew64_on_rv32.c            |   392 +
 .../vxor.vx_i_mask_sew64_on_rv32.c            |   104 +
 .../vxor.vx_i_sew64_on_rv32.c                 |   104 +
 .../vxor.vx_i_tx_sew64_on_rv32.c              |   200 +
 .../vxor.vx_i_txmx_sew64_on_rv32.c            |   392 +
 .../vxor.vx_u_mask_sew64_on_rv32.c            |   104 +
 .../vxor.vx_u_sew64_on_rv32.c                 |   104 +
 .../vxor.vx_u_tx_sew64_on_rv32.c              |   200 +
 .../vxor.vx_u_txmx_sew64_on_rv32.c            |   392 +
 .../rvv/intrinsic-rv32/rvv-intrinsic-rv32.exp |    44 +
 .../vaadd.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vaadd.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vaadd.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vaadd.vx_i_sew64_on_rv32.c |    40 +
 .../vaadd.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vaadd.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vaadd.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vaadd.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vaadd.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vaadd.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vaadd.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vaadd.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vaaddu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vaaddu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vaaddu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vaaddu.vx_u_sew64_on_rv32.c               |    40 +
 .../vaaddu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vaaddu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vaaddu.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vaaddu.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vaaddu.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vaaddu.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vaaddu.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vaaddu.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../intrinsic-rv32/vadc.vxm_i_sew64_on_rv32.c |    40 +
 .../vadc.vxm_i_sew64_on_rv32_vl31.c           |    40 +
 .../vadc.vxm_i_sew64_on_rv32_vl32.c           |    40 +
 .../vadc.vxm_i_tx_sew64_on_rv32.c             |    72 +
 .../vadc.vxm_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vadc.vxm_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../intrinsic-rv32/vadc.vxm_u_sew64_on_rv32.c |    40 +
 .../vadc.vxm_u_sew64_on_rv32_vl31.c           |    40 +
 .../vadc.vxm_u_sew64_on_rv32_vl32.c           |    40 +
 .../vadc.vxm_u_tx_sew64_on_rv32.c             |    72 +
 .../vadc.vxm_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vadc.vxm_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vadd.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vadd.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vadd.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vadd.vx_i_sew64_on_rv32.c  |    40 +
 .../vadd.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vadd.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vadd.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vadd.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vadd.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vadd.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vadd.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vadd.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vadd.vx_u_mask_sew64_on_rv32.c            |    40 +
 .../vadd.vx_u_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vadd.vx_u_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vadd.vx_u_sew64_on_rv32.c  |    40 +
 .../vadd.vx_u_sew64_on_rv32_vl31.c            |    40 +
 .../vadd.vx_u_sew64_on_rv32_vl32.c            |    40 +
 .../vadd.vx_u_tx_sew64_on_rv32.c              |    72 +
 .../vadd.vx_u_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vadd.vx_u_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vadd.vx_u_txmx_sew64_on_rv32.c            |   136 +
 .../vadd.vx_u_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vadd.vx_u_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vand.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vand.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vand.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vand.vx_i_sew64_on_rv32.c  |    40 +
 .../vand.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vand.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vand.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vand.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vand.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vand.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vand.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vand.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vand.vx_u_mask_sew64_on_rv32.c            |    40 +
 .../vand.vx_u_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vand.vx_u_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vand.vx_u_sew64_on_rv32.c  |    40 +
 .../vand.vx_u_sew64_on_rv32_vl31.c            |    40 +
 .../vand.vx_u_sew64_on_rv32_vl32.c            |    40 +
 .../vand.vx_u_tx_sew64_on_rv32.c              |    72 +
 .../vand.vx_u_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vand.vx_u_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vand.vx_u_txmx_sew64_on_rv32.c            |   136 +
 .../vand.vx_u_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vand.vx_u_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vasub.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vasub.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vasub.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vasub.vx_i_sew64_on_rv32.c |    40 +
 .../vasub.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vasub.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vasub.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vasub.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vasub.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vasub.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vasub.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vasub.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vasubu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vasubu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vasubu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vasubu.vx_u_sew64_on_rv32.c               |    40 +
 .../vasubu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vasubu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vasubu.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vasubu.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vasubu.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vasubu.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vasubu.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vasubu.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vdiv.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vdiv.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vdiv.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vdiv.vx_i_sew64_on_rv32.c  |    40 +
 .../vdiv.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vdiv.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vdiv.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vdiv.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vdiv.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vdiv.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vdiv.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vdiv.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vdivu.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vdivu.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vdivu.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vdivu.vx_u_sew64_on_rv32.c |    40 +
 .../vdivu.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vdivu.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vdivu.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vdivu.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vdivu.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vdivu.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vdivu.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vdivu.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmacc.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmacc.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmacc.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmacc.vx_i_sew64_on_rv32.c |    40 +
 .../vmacc.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmacc.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmacc.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vmacc.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmacc.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmacc.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vmacc.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmacc.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmacc.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vmacc.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmacc.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmacc.vx_u_sew64_on_rv32.c |    40 +
 .../vmacc.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmacc.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmacc.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vmacc.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmacc.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmacc.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vmacc.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmacc.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../intrinsic-rv32/vmadc.vx_i_sew64_on_rv32.c |    40 +
 .../vmadc.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmadc.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../intrinsic-rv32/vmadc.vx_u_sew64_on_rv32.c |    40 +
 .../vmadc.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmadc.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmadc.vxm_i_sew64_on_rv32.c               |    40 +
 .../vmadc.vxm_i_sew64_on_rv32_vl31.c          |    40 +
 .../vmadc.vxm_i_sew64_on_rv32_vl32.c          |    40 +
 .../vmadc.vxm_u_sew64_on_rv32.c               |    40 +
 .../vmadc.vxm_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmadc.vxm_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmadd.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmadd.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmadd.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmadd.vx_i_sew64_on_rv32.c |    40 +
 .../vmadd.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmadd.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmadd.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vmadd.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmadd.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmadd.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vmadd.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmadd.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmadd.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vmadd.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmadd.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmadd.vx_u_sew64_on_rv32.c |    40 +
 .../vmadd.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmadd.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmadd.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vmadd.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmadd.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmadd.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vmadd.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmadd.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmax.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vmax.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vmax.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vmax.vx_i_sew64_on_rv32.c  |    40 +
 .../vmax.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vmax.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vmax.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vmax.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vmax.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vmax.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vmax.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vmax.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vmaxu.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vmaxu.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmaxu.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmaxu.vx_u_sew64_on_rv32.c |    40 +
 .../vmaxu.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmaxu.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmaxu.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vmaxu.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmaxu.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmaxu.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vmaxu.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmaxu.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmerge.vxm_i_sew64_on_rv32.c              |    36 +
 .../vmerge.vxm_i_sew64_on_rv32_vl31.c         |    36 +
 .../vmerge.vxm_i_sew64_on_rv32_vl32.c         |    36 +
 .../vmerge.vxm_i_tx_sew64_on_rv32.c           |    64 +
 .../vmerge.vxm_i_tx_sew64_on_rv32_vl31.c      |    64 +
 .../vmerge.vxm_i_tx_sew64_on_rv32_vl32.c      |    64 +
 .../vmerge.vxm_u_sew64_on_rv32.c              |    36 +
 .../vmerge.vxm_u_sew64_on_rv32_vl31.c         |    36 +
 .../vmerge.vxm_u_sew64_on_rv32_vl32.c         |    36 +
 .../vmerge.vxm_u_tx_sew64_on_rv32.c           |    64 +
 .../vmerge.vxm_u_tx_sew64_on_rv32_vl31.c      |    64 +
 .../vmerge.vxm_u_tx_sew64_on_rv32_vl32.c      |    64 +
 .../vmin.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vmin.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vmin.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vmin.vx_i_sew64_on_rv32.c  |    40 +
 .../vmin.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vmin.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vmin.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vmin.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vmin.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vmin.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vmin.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vmin.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vminu.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vminu.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vminu.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vminu.vx_u_sew64_on_rv32.c |    40 +
 .../vminu.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vminu.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vminu.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vminu.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vminu.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vminu.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vminu.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vminu.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../intrinsic-rv32/vmsbc.vx_i_sew64_on_rv32.c |    40 +
 .../vmsbc.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmsbc.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../intrinsic-rv32/vmsbc.vx_u_sew64_on_rv32.c |    40 +
 .../vmsbc.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmsbc.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmsbc.vxm_i_sew64_on_rv32.c               |    40 +
 .../vmsbc.vxm_i_sew64_on_rv32_vl31.c          |    40 +
 .../vmsbc.vxm_i_sew64_on_rv32_vl32.c          |    40 +
 .../vmsbc.vxm_u_sew64_on_rv32.c               |    40 +
 .../vmsbc.vxm_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmsbc.vxm_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmseq.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmseq.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmseq.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmseq.vx_i_sew64_on_rv32.c |    40 +
 .../vmseq.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmseq.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmseq.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vmseq.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmseq.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmseq.vx_u_sew64_on_rv32.c |    40 +
 .../vmseq.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmseq.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmsge.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmsge.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmsge.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmsge.vx_i_sew64_on_rv32.c |    40 +
 .../vmsge.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmsge.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmsgeu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vmsgeu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vmsgeu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vmsgeu.vx_u_sew64_on_rv32.c               |    40 +
 .../vmsgeu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmsgeu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmsgt.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmsgt.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmsgt.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmsgt.vx_i_sew64_on_rv32.c |    40 +
 .../vmsgt.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmsgt.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmsgtu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vmsgtu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vmsgtu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vmsgtu.vx_u_sew64_on_rv32.c               |    40 +
 .../vmsgtu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmsgtu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmsle.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmsle.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmsle.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmsle.vx_i_sew64_on_rv32.c |    40 +
 .../vmsle.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmsle.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmsleu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vmsleu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vmsleu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vmsleu.vx_u_sew64_on_rv32.c               |    40 +
 .../vmsleu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmsleu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmslt.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmslt.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmslt.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmslt.vx_i_sew64_on_rv32.c |    40 +
 .../vmslt.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmslt.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmsltu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vmsltu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vmsltu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vmsltu.vx_u_sew64_on_rv32.c               |    40 +
 .../vmsltu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmsltu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmsne.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmsne.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmsne.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmsne.vx_i_sew64_on_rv32.c |    40 +
 .../vmsne.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmsne.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmsne.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vmsne.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmsne.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmsne.vx_u_sew64_on_rv32.c |    40 +
 .../vmsne.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vmsne.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vmul.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vmul.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vmul.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vmul.vx_i_sew64_on_rv32.c  |    40 +
 .../vmul.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vmul.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vmul.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vmul.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vmul.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vmul.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vmul.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vmul.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vmul.vx_u_mask_sew64_on_rv32.c            |    40 +
 .../vmul.vx_u_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vmul.vx_u_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vmul.vx_u_sew64_on_rv32.c  |    40 +
 .../vmul.vx_u_sew64_on_rv32_vl31.c            |    40 +
 .../vmul.vx_u_sew64_on_rv32_vl32.c            |    40 +
 .../vmul.vx_u_tx_sew64_on_rv32.c              |    72 +
 .../vmul.vx_u_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vmul.vx_u_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vmul.vx_u_txmx_sew64_on_rv32.c            |   136 +
 .../vmul.vx_u_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vmul.vx_u_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vmulh.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vmulh.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vmulh.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vmulh.vx_i_sew64_on_rv32.c |    40 +
 .../vmulh.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vmulh.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vmulh.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vmulh.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vmulh.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vmulh.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vmulh.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vmulh.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vmulhsu.vx_iu_mask_sew64_on_rv32.c        |    40 +
 .../vmulhsu.vx_iu_mask_sew64_on_rv32_vl31.c   |    40 +
 .../vmulhsu.vx_iu_mask_sew64_on_rv32_vl32.c   |    40 +
 .../vmulhsu.vx_iu_sew64_on_rv32.c             |    40 +
 .../vmulhsu.vx_iu_sew64_on_rv32_vl31.c        |    40 +
 .../vmulhsu.vx_iu_sew64_on_rv32_vl32.c        |    40 +
 .../vmulhsu.vx_iu_tx_sew64_on_rv32.c          |    72 +
 .../vmulhsu.vx_iu_tx_sew64_on_rv32_vl31.c     |    72 +
 .../vmulhsu.vx_iu_tx_sew64_on_rv32_vl32.c     |    72 +
 .../vmulhsu.vx_iu_txmx_sew64_on_rv32.c        |   136 +
 .../vmulhsu.vx_iu_txmx_sew64_on_rv32_vl31.c   |   136 +
 .../vmulhsu.vx_iu_txmx_sew64_on_rv32_vl32.c   |   136 +
 .../vmulhu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vmulhu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vmulhu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vmulhu.vx_u_sew64_on_rv32.c               |    40 +
 .../vmulhu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vmulhu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vmulhu.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vmulhu.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vmulhu.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vmulhu.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vmulhu.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vmulhu.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vnmsac.vx_i_mask_sew64_on_rv32.c          |    40 +
 .../vnmsac.vx_i_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vnmsac.vx_i_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vnmsac.vx_i_sew64_on_rv32.c               |    40 +
 .../vnmsac.vx_i_sew64_on_rv32_vl31.c          |    40 +
 .../vnmsac.vx_i_sew64_on_rv32_vl32.c          |    40 +
 .../vnmsac.vx_i_tx_sew64_on_rv32.c            |    72 +
 .../vnmsac.vx_i_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vnmsac.vx_i_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vnmsac.vx_i_txmx_sew64_on_rv32.c          |   136 +
 .../vnmsac.vx_i_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vnmsac.vx_i_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vnmsac.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vnmsac.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vnmsac.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vnmsac.vx_u_sew64_on_rv32.c               |    40 +
 .../vnmsac.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vnmsac.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vnmsac.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vnmsac.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vnmsac.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vnmsac.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vnmsac.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vnmsac.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vnmsub.vx_i_mask_sew64_on_rv32.c          |    40 +
 .../vnmsub.vx_i_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vnmsub.vx_i_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vnmsub.vx_i_sew64_on_rv32.c               |    40 +
 .../vnmsub.vx_i_sew64_on_rv32_vl31.c          |    40 +
 .../vnmsub.vx_i_sew64_on_rv32_vl32.c          |    40 +
 .../vnmsub.vx_i_tx_sew64_on_rv32.c            |    72 +
 .../vnmsub.vx_i_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vnmsub.vx_i_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vnmsub.vx_i_txmx_sew64_on_rv32.c          |   136 +
 .../vnmsub.vx_i_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vnmsub.vx_i_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vnmsub.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vnmsub.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vnmsub.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vnmsub.vx_u_sew64_on_rv32.c               |    40 +
 .../vnmsub.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vnmsub.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vnmsub.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vnmsub.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vnmsub.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vnmsub.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vnmsub.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vnmsub.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vor.vx_i_mask_sew64_on_rv32.c             |    40 +
 .../vor.vx_i_mask_sew64_on_rv32_vl31.c        |    40 +
 .../vor.vx_i_mask_sew64_on_rv32_vl32.c        |    40 +
 .../intrinsic-rv32/vor.vx_i_sew64_on_rv32.c   |    40 +
 .../vor.vx_i_sew64_on_rv32_vl31.c             |    40 +
 .../vor.vx_i_sew64_on_rv32_vl32.c             |    40 +
 .../vor.vx_i_tx_sew64_on_rv32.c               |    72 +
 .../vor.vx_i_tx_sew64_on_rv32_vl31.c          |    72 +
 .../vor.vx_i_tx_sew64_on_rv32_vl32.c          |    72 +
 .../vor.vx_i_txmx_sew64_on_rv32.c             |   136 +
 .../vor.vx_i_txmx_sew64_on_rv32_vl31.c        |   136 +
 .../vor.vx_i_txmx_sew64_on_rv32_vl32.c        |   136 +
 .../vor.vx_u_mask_sew64_on_rv32.c             |    40 +
 .../vor.vx_u_mask_sew64_on_rv32_vl31.c        |    40 +
 .../vor.vx_u_mask_sew64_on_rv32_vl32.c        |    40 +
 .../intrinsic-rv32/vor.vx_u_sew64_on_rv32.c   |    40 +
 .../vor.vx_u_sew64_on_rv32_vl31.c             |    40 +
 .../vor.vx_u_sew64_on_rv32_vl32.c             |    40 +
 .../vor.vx_u_tx_sew64_on_rv32.c               |    72 +
 .../vor.vx_u_tx_sew64_on_rv32_vl31.c          |    72 +
 .../vor.vx_u_tx_sew64_on_rv32_vl32.c          |    72 +
 .../vor.vx_u_txmx_sew64_on_rv32.c             |   136 +
 .../vor.vx_u_txmx_sew64_on_rv32_vl31.c        |   136 +
 .../vor.vx_u_txmx_sew64_on_rv32_vl32.c        |   136 +
 .../vrem.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vrem.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vrem.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vrem.vx_i_sew64_on_rv32.c  |    40 +
 .../vrem.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vrem.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vrem.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vrem.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vrem.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vrem.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vrem.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vrem.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vremu.vx_u_mask_sew64_on_rv32.c           |    40 +
 .../vremu.vx_u_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vremu.vx_u_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vremu.vx_u_sew64_on_rv32.c |    40 +
 .../vremu.vx_u_sew64_on_rv32_vl31.c           |    40 +
 .../vremu.vx_u_sew64_on_rv32_vl32.c           |    40 +
 .../vremu.vx_u_tx_sew64_on_rv32.c             |    72 +
 .../vremu.vx_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vremu.vx_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vremu.vx_u_txmx_sew64_on_rv32.c           |   136 +
 .../vremu.vx_u_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vremu.vx_u_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vrsub.vx_i_mask_pseudo_sew64_on_rv32.c    |    40 +
 ...rsub.vx_i_mask_pseudo_sew64_on_rv32_vl31.c |    40 +
 ...rsub.vx_i_mask_pseudo_sew64_on_rv32_vl32.c |    40 +
 .../vrsub.vx_i_pseudo_sew64_on_rv32.c         |    40 +
 .../vrsub.vx_i_pseudo_sew64_on_rv32_vl31.c    |    40 +
 .../vrsub.vx_i_pseudo_sew64_on_rv32_vl32.c    |    40 +
 .../vrsub.vx_i_tx_pseudo_sew64_on_rv32.c      |    72 +
 .../vrsub.vx_i_tx_pseudo_sew64_on_rv32_vl31.c |    72 +
 .../vrsub.vx_i_tx_pseudo_sew64_on_rv32_vl32.c |    72 +
 .../vrsub.vx_i_txmx_pseudo_sew64_on_rv32.c    |   136 +
 ...rsub.vx_i_txmx_pseudo_sew64_on_rv32_vl31.c |   136 +
 ...rsub.vx_i_txmx_pseudo_sew64_on_rv32_vl32.c |   136 +
 .../vrsub.vx_u_mask_pseudo_sew64_on_rv32.c    |    40 +
 ...rsub.vx_u_mask_pseudo_sew64_on_rv32_vl31.c |    40 +
 ...rsub.vx_u_mask_pseudo_sew64_on_rv32_vl32.c |    40 +
 .../vrsub.vx_u_pseudo_sew64_on_rv32.c         |    40 +
 .../vrsub.vx_u_pseudo_sew64_on_rv32_vl31.c    |    40 +
 .../vrsub.vx_u_pseudo_sew64_on_rv32_vl32.c    |    40 +
 .../vrsub.vx_u_tx_pseudo_sew64_on_rv32.c      |    72 +
 .../vrsub.vx_u_tx_pseudo_sew64_on_rv32_vl31.c |    72 +
 .../vrsub.vx_u_tx_pseudo_sew64_on_rv32_vl32.c |    72 +
 .../vrsub.vx_u_txmx_pseudo_sew64_on_rv32.c    |   136 +
 ...rsub.vx_u_txmx_pseudo_sew64_on_rv32_vl31.c |   136 +
 ...rsub.vx_u_txmx_pseudo_sew64_on_rv32_vl32.c |   136 +
 .../vsadd.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vsadd.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vsadd.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vsadd.vx_i_sew64_on_rv32.c |    40 +
 .../vsadd.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vsadd.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vsadd.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vsadd.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vsadd.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vsadd.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vsadd.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vsadd.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vsaddu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vsaddu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vsaddu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vsaddu.vx_u_sew64_on_rv32.c               |    40 +
 .../vsaddu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vsaddu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vsaddu.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vsaddu.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vsaddu.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vsaddu.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vsaddu.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vsaddu.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../intrinsic-rv32/vsbc.vxm_i_sew64_on_rv32.c |    40 +
 .../vsbc.vxm_i_sew64_on_rv32_vl31.c           |    40 +
 .../vsbc.vxm_i_sew64_on_rv32_vl32.c           |    40 +
 .../vsbc.vxm_i_tx_sew64_on_rv32.c             |    72 +
 .../vsbc.vxm_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vsbc.vxm_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../intrinsic-rv32/vsbc.vxm_u_sew64_on_rv32.c |    40 +
 .../vsbc.vxm_u_sew64_on_rv32_vl31.c           |    40 +
 .../vsbc.vxm_u_sew64_on_rv32_vl32.c           |    40 +
 .../vsbc.vxm_u_tx_sew64_on_rv32.c             |    72 +
 .../vsbc.vxm_u_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vsbc.vxm_u_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vsmul.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vsmul.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vsmul.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vsmul.vx_i_sew64_on_rv32.c |    40 +
 .../vsmul.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vsmul.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vsmul.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vsmul.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vsmul.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vsmul.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vsmul.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vsmul.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vssub.vx_i_mask_sew64_on_rv32.c           |    40 +
 .../vssub.vx_i_mask_sew64_on_rv32_vl31.c      |    40 +
 .../vssub.vx_i_mask_sew64_on_rv32_vl32.c      |    40 +
 .../intrinsic-rv32/vssub.vx_i_sew64_on_rv32.c |    40 +
 .../vssub.vx_i_sew64_on_rv32_vl31.c           |    40 +
 .../vssub.vx_i_sew64_on_rv32_vl32.c           |    40 +
 .../vssub.vx_i_tx_sew64_on_rv32.c             |    72 +
 .../vssub.vx_i_tx_sew64_on_rv32_vl31.c        |    72 +
 .../vssub.vx_i_tx_sew64_on_rv32_vl32.c        |    72 +
 .../vssub.vx_i_txmx_sew64_on_rv32.c           |   136 +
 .../vssub.vx_i_txmx_sew64_on_rv32_vl31.c      |   136 +
 .../vssub.vx_i_txmx_sew64_on_rv32_vl32.c      |   136 +
 .../vssubu.vx_u_mask_sew64_on_rv32.c          |    40 +
 .../vssubu.vx_u_mask_sew64_on_rv32_vl31.c     |    40 +
 .../vssubu.vx_u_mask_sew64_on_rv32_vl32.c     |    40 +
 .../vssubu.vx_u_sew64_on_rv32.c               |    40 +
 .../vssubu.vx_u_sew64_on_rv32_vl31.c          |    40 +
 .../vssubu.vx_u_sew64_on_rv32_vl32.c          |    40 +
 .../vssubu.vx_u_tx_sew64_on_rv32.c            |    72 +
 .../vssubu.vx_u_tx_sew64_on_rv32_vl31.c       |    72 +
 .../vssubu.vx_u_tx_sew64_on_rv32_vl32.c       |    72 +
 .../vssubu.vx_u_txmx_sew64_on_rv32.c          |   136 +
 .../vssubu.vx_u_txmx_sew64_on_rv32_vl31.c     |   136 +
 .../vssubu.vx_u_txmx_sew64_on_rv32_vl32.c     |   136 +
 .../vsub.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vsub.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vsub.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vsub.vx_i_sew64_on_rv32.c  |    40 +
 .../vsub.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vsub.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vsub.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vsub.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vsub.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vsub.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vsub.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vsub.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vsub.vx_u_mask_sew64_on_rv32.c            |    40 +
 .../vsub.vx_u_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vsub.vx_u_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vsub.vx_u_sew64_on_rv32.c  |    40 +
 .../vsub.vx_u_sew64_on_rv32_vl31.c            |    40 +
 .../vsub.vx_u_sew64_on_rv32_vl32.c            |    40 +
 .../vsub.vx_u_tx_sew64_on_rv32.c              |    72 +
 .../vsub.vx_u_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vsub.vx_u_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vsub.vx_u_txmx_sew64_on_rv32.c            |   136 +
 .../vsub.vx_u_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vsub.vx_u_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vxor.vx_i_mask_sew64_on_rv32.c            |    40 +
 .../vxor.vx_i_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vxor.vx_i_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vxor.vx_i_sew64_on_rv32.c  |    40 +
 .../vxor.vx_i_sew64_on_rv32_vl31.c            |    40 +
 .../vxor.vx_i_sew64_on_rv32_vl32.c            |    40 +
 .../vxor.vx_i_tx_sew64_on_rv32.c              |    72 +
 .../vxor.vx_i_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vxor.vx_i_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vxor.vx_i_txmx_sew64_on_rv32.c            |   136 +
 .../vxor.vx_i_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vxor.vx_i_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../vxor.vx_u_mask_sew64_on_rv32.c            |    40 +
 .../vxor.vx_u_mask_sew64_on_rv32_vl31.c       |    40 +
 .../vxor.vx_u_mask_sew64_on_rv32_vl32.c       |    40 +
 .../intrinsic-rv32/vxor.vx_u_sew64_on_rv32.c  |    40 +
 .../vxor.vx_u_sew64_on_rv32_vl31.c            |    40 +
 .../vxor.vx_u_sew64_on_rv32_vl32.c            |    40 +
 .../vxor.vx_u_tx_sew64_on_rv32.c              |    72 +
 .../vxor.vx_u_tx_sew64_on_rv32_vl31.c         |    72 +
 .../vxor.vx_u_tx_sew64_on_rv32_vl32.c         |    72 +
 .../vxor.vx_u_txmx_sew64_on_rv32.c            |   136 +
 .../vxor.vx_u_txmx_sew64_on_rv32_vl31.c       |   136 +
 .../vxor.vx_u_txmx_sew64_on_rv32_vl32.c       |   136 +
 .../riscv/rvv/intrinsic/mask_load_store.c     |    77 +
 .../riscv/rvv/intrinsic/mask_load_store_31.c  |    77 +
 .../riscv/rvv/intrinsic/mask_load_store_32.c  |    77 +
 .../riscv/rvv/intrinsic/misc_func.c           |  2921 +
 .../riscv/rvv/intrinsic/rvv-intrinsic.exp     |    47 +
 .../gcc.target/riscv/rvv/intrinsic/set-get.c  |   730 +
 .../gcc.target/riscv/rvv/intrinsic/vaadd.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vaaddu.c   | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vadc.c     | 23767 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vadd.c     | 59408 +++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vand.c     | 63631 ++++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vasub.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vasubu.c   | 15847 ++++
 .../riscv/rvv/intrinsic/vcompress.c           |  7161 ++
 .../gcc.target/riscv/rvv/intrinsic/vcpop.c    |   637 +
 .../gcc.target/riscv/rvv/intrinsic/vdiv.c     | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vdivu.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vfabs.c    |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfadd.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfclass.c  |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vfcvt.c    | 19446 +++++
 .../gcc.target/riscv/rvv/intrinsic/vfdiv.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfirst.c   |   637 +
 .../gcc.target/riscv/rvv/intrinsic/vfmacc.c   |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmadd.c   |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmax.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmerge.c  |  1221 +
 .../gcc.target/riscv/rvv/intrinsic/vfmin.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmsac.c   |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmsub.c   |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmul.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfmv.c     |  2571 +
 .../gcc.target/riscv/rvv/intrinsic/vfncvt.c   | 18726 +++++
 .../gcc.target/riscv/rvv/intrinsic/vfneg.c    |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfnmacc.c  |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfnmadd.c  |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfnmsac.c  |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfnmsub.c  |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfrdiv.c   |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfrec7.c   |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfredmax.c |  2436 +
 .../gcc.target/riscv/rvv/intrinsic/vfredmin.c |  2436 +
 .../riscv/rvv/intrinsic/vfredosum.c           |  2436 +
 .../riscv/rvv/intrinsic/vfredusum.c           |  2436 +
 .../gcc.target/riscv/rvv/intrinsic/vfrsqrt7.c |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfrsub.c   |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfsgnj.c   |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfsgnjn.c  |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfsgnjx.c  |  6486 ++
 .../riscv/rvv/intrinsic/vfslide1down.c        |  3246 +
 .../riscv/rvv/intrinsic/vfslide1up.c          |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfsqrt.c   |  3246 +
 .../gcc.target/riscv/rvv/intrinsic/vfsub.c    |  6486 ++
 .../gcc.target/riscv/rvv/intrinsic/vfwadd.c   |  5766 ++
 .../gcc.target/riscv/rvv/intrinsic/vfwcvt.c   | 13686 ++++
 .../gcc.target/riscv/rvv/intrinsic/vfwmacc.c  |  2886 +
 .../gcc.target/riscv/rvv/intrinsic/vfwmsac.c  |  2886 +
 .../gcc.target/riscv/rvv/intrinsic/vfwmul.c   |  2886 +
 .../gcc.target/riscv/rvv/intrinsic/vfwnmacc.c |  2886 +
 .../gcc.target/riscv/rvv/intrinsic/vfwnmsac.c |  2886 +
 .../riscv/rvv/intrinsic/vfwredosum.c          |  1356 +
 .../riscv/rvv/intrinsic/vfwredusum.c          |  1356 +
 .../gcc.target/riscv/rvv/intrinsic/vfwsub.c   |  5766 ++
 .../gcc.target/riscv/rvv/intrinsic/vid.c      |  7927 ++
 .../gcc.target/riscv/rvv/intrinsic/viota.c    |  7927 ++
 .../gcc.target/riscv/rvv/intrinsic/vlex_1.c   | 17840 +++++
 .../gcc.target/riscv/rvv/intrinsic/vlex_2.c   |  1251 +
 .../gcc.target/riscv/rvv/intrinsic/vlexff_1.c | 17840 +++++
 .../gcc.target/riscv/rvv/intrinsic/vlexff_2.c |  1251 +
 .../riscv/rvv/intrinsic/vloxeix_1.c           | 16220 ++++
 .../riscv/rvv/intrinsic/vloxeix_2.c           | 18755 +++++
 .../riscv/rvv/intrinsic/vloxeix_3.c           | 18320 +++++
 .../riscv/rvv/intrinsic/vloxeix_4.c           | 15486 ++++
 .../gcc.target/riscv/rvv/intrinsic/vlsex_1.c  | 17840 +++++
 .../gcc.target/riscv/rvv/intrinsic/vlsex_2.c  |  1251 +
 .../riscv/rvv/intrinsic/vluxeix_1.c           | 16220 ++++
 .../riscv/rvv/intrinsic/vluxeix_2.c           | 18755 +++++
 .../riscv/rvv/intrinsic/vluxeix_3.c           | 18320 +++++
 .../riscv/rvv/intrinsic/vluxeix_4.c           | 15486 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmacc.c    | 31687 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vmadc.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmadd.c    | 31687 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vmand.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmandn.c   |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmax.c     | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmaxu.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmclr.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmerge.c   | 24982 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vmfeq.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmfge.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmfgt.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmfle.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmflt.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmfne.c    |  3031 +
 .../gcc.target/riscv/rvv/intrinsic/vmin.c     | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vminu.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmmv.c     |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmnand.c   |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmnor.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmnot.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmor.c     |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmorn.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmsbc.c    |  7927 ++
 .../gcc.target/riscv/rvv/intrinsic/vmsbf.c    |  1184 +
 .../gcc.target/riscv/rvv/intrinsic/vmseq.c    | 29576 +++++++
 .../gcc.target/riscv/rvv/intrinsic/vmset.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmsge.c    |  7400 ++
 .../gcc.target/riscv/rvv/intrinsic/vmsgeu.c   |  7400 ++
 .../gcc.target/riscv/rvv/intrinsic/vmsgt.c    | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmsgtu.c   | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmsif.c    |  1184 +
 .../gcc.target/riscv/rvv/intrinsic/vmsle.c    | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmsleu.c   | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmslt.c    | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmsltu.c   | 14792 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmsne.c    | 29576 +++++++
 .../gcc.target/riscv/rvv/intrinsic/vmsof.c    |  1184 +
 .../gcc.target/riscv/rvv/intrinsic/vmul.c     | 31687 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vmulh.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmulhsu.c  | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmulhu.c   | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vmv1r.c    |   119 +
 .../gcc.target/riscv/rvv/intrinsic/vmv2r.c    |   119 +
 .../gcc.target/riscv/rvv/intrinsic/vmv4r.c    |   119 +
 .../gcc.target/riscv/rvv/intrinsic/vmv8r.c    |   119 +
 .../gcc.target/riscv/rvv/intrinsic/vmxnor.c   |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vmxor.c    |   322 +
 .../gcc.target/riscv/rvv/intrinsic/vnclip.c   | 21697 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vnclipu.c  | 21697 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vncvt.c    | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vneg.c     |  7927 ++
 .../gcc.target/riscv/rvv/intrinsic/vnmsac.c   | 31687 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vnmsub.c   | 31687 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vnot.c     | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vnsra.c    | 21697 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vnsrl.c    | 21697 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vor.c      | 63631 ++++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vredand.c  | 11887 +++
 .../gcc.target/riscv/rvv/intrinsic/vredmax.c  |  5947 ++
 .../gcc.target/riscv/rvv/intrinsic/vredmaxu.c |  5947 ++
 .../gcc.target/riscv/rvv/intrinsic/vredmin.c  |  5947 ++
 .../gcc.target/riscv/rvv/intrinsic/vredminu.c |  5947 ++
 .../gcc.target/riscv/rvv/intrinsic/vredor.c   | 11887 +++
 .../gcc.target/riscv/rvv/intrinsic/vredsum.c  | 11887 +++
 .../gcc.target/riscv/rvv/intrinsic/vredxor.c  | 11887 +++
 .../gcc.target/riscv/rvv/intrinsic/vrem.c     | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vremu.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vrgather.c | 38166 +++++++++
 .../riscv/rvv/intrinsic/vrgatherei16.c        | 18366 +++++
 .../gcc.target/riscv/rvv/intrinsic/vrsub.c    | 47791 ++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsadd.c    | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsaddu.c   | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsbc.c     | 11887 +++
 .../gcc.target/riscv/rvv/intrinsic/vsetvl.c   |   733 +
 .../gcc.target/riscv/rvv/intrinsic/vsex.c     |  4776 ++
 .../gcc.target/riscv/rvv/intrinsic/vsext.c    | 10087 +++
 .../riscv/rvv/intrinsic/vslide1down.c         | 15847 ++++
 .../riscv/rvv/intrinsic/vslide1up.c           | 15847 ++++
 .../riscv/rvv/intrinsic/vslidedown.c          | 48176 ++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vslideup.c | 48176 ++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsll.c     | 63631 ++++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsmul.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vsoxeix.c  | 17196 +++++
 .../gcc.target/riscv/rvv/intrinsic/vsra.c     | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsrl.c     | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vssex.c    |  4776 ++
 .../gcc.target/riscv/rvv/intrinsic/vssra.c    | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vssrl.c    | 31819 ++++++++
 .../gcc.target/riscv/rvv/intrinsic/vssub.c    | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vssubu.c   | 15847 ++++
 .../gcc.target/riscv/rvv/intrinsic/vsub.c     | 59408 +++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vsuxeix.c  | 17196 +++++
 .../gcc.target/riscv/rvv/intrinsic/vwadd.c    | 21607 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vwaddu.c   | 21607 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vwcvt.c    |  5407 ++
 .../gcc.target/riscv/rvv/intrinsic/vwcvtu.c   |  5407 ++
 .../gcc.target/riscv/rvv/intrinsic/vwmacc.c   | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwmaccsu.c | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwmaccu.c  | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwmaccus.c |  5407 ++
 .../gcc.target/riscv/rvv/intrinsic/vwmul.c    | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwmulsu.c  | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwmulu.c   | 10807 +++
 .../gcc.target/riscv/rvv/intrinsic/vwredsum.c |  4867 ++
 .../riscv/rvv/intrinsic/vwredsumu.c           |  4867 ++
 .../gcc.target/riscv/rvv/intrinsic/vwsub.c    | 21607 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vwsubu.c   | 21607 ++++++
 .../gcc.target/riscv/rvv/intrinsic/vxor.c     | 63631 ++++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/vzext.c    | 10087 +++
 .../gcc.target/riscv/rvv/stack/rvv-stack.exp  |    47 +
 .../rvv/stack/stack-check-alloca-scalar.c     |    53 +
 .../rvv/stack/stack-check-alloca-vector.c     |    45 +
 .../stack/stack-check-save-restore-scalar.c   |    48 +
 .../stack/stack-check-save-restore-vector.c   |    62 +
 .../riscv/rvv/stack/stack-check-scalar.c      |   205 +
 .../rvv/stack/stack-check-vararg-scalar.c     |    33 +
 .../riscv/rvv/stack/stack-check-vector_1.c    |   277 +
 .../riscv/rvv/stack/stack-check-vector_2.c    |   141 +
 1409 files changed, 3197907 insertions(+), 199 deletions(-)
 create mode 100644 gcc/config/riscv/md-parser
 create mode 100644 gcc/config/riscv/riscv-insert-vsetvl.cc
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.cc
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.h
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-iterators.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.cc
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.h
 create mode 100644 gcc/config/riscv/riscv-vector.cc
 create mode 100644 gcc/config/riscv/riscv-vector.h
 create mode 100644 gcc/config/riscv/riscv_vector.h
 create mode 100644 gcc/config/riscv/vector-iterators.md
 create mode 100644 gcc/config/riscv/vector.md
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/misc_func.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/rvv-intrinsic.exp
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/set-get.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vaadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vaaddu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vadc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vand.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vasub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vasubu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vcompress.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vcpop.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vdiv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vdivu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfabs.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfclass.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfcvt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfdiv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfirst.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmax.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmerge.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmin.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmsac.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmul.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfmv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfncvt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfneg.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfnmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfnmadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfnmsac.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfnmsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfrdiv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfrec7.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfredmax.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfredmin.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfredosum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfredusum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfrsqrt7.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfrsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfsgnj.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfsgnjn.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfsgnjx.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfslide1down.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfslide1up.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfsqrt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwcvt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwmsac.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwmul.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwnmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwnmsac.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwredosum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwredusum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vfwsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vid.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/viota.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vlex_1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vlexff_1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vloxeix_1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vloxeix_2.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vloxeix_3.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vloxeix_4.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vlsex_1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vluxeix_1.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vluxeix_2.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vluxeix_3.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vluxeix_4.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmadc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmand.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmandn.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmax.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmaxu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmclr.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmerge.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmfeq.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmfge.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmfgt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmfle.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmflt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmfne.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmin.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vminu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmmv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmnand.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmnor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmnot.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmorn.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsbc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsbf.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmseq.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmset.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsge.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsgeu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsgt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsgtu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsif.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsle.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsleu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmslt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsltu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsne.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmsof.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmul.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmulh.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmulhsu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmulhu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmv.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmv1r.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmv2r.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmv4r.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmv8r.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmxnor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vmxor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnclip.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnclipu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vncvt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vneg.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnmsac.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnmsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnot.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnsra.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vnsrl.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredand.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredmax.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredmaxu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredmin.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredminu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredsum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vredxor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vrem.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vremu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vrgather.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vrgatherei16.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vrsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsaddu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsbc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsetvl.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsex.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsext.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vslide1down.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vslide1up.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vslidedown.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vslideup.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsll.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsmul.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsoxeix.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsra.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsrl.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vssex.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vssra.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vssrl.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vssub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vssubu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vsuxeix.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwadd.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwaddu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwcvt.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwcvtu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmacc.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmaccsu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmaccu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmaccus.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmul.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmulsu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwmulu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwredsum.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwredsumu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwsub.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vwsubu.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vxor.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/vzext.C
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/custom/calling_convention_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/custom/rvv-custom.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/rvv-e64-on-rv32.h
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/rvv-intrinsic-execute.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vadd.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vand.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmax.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmax.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmax.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmax.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmaxu.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmaxu.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmaxu.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmaxu.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmin.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmin.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmin.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmin.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vminu.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vminu.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vminu.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vminu.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.s.x_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_i_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vmv.v.x_u_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vor.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vrsub.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_i_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1down.vx_u_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_i_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m1_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m2_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m4_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vslide1up.vx_u_sew64_on_rv32_m8_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vsub.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_i_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_i_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_i_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_i_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_u_sew64_on_rv32_m1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_u_sew64_on_rv32_m2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_u_sew64_on_rv32_m4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-excute/vx-sew64/vxor.vx_u_sew64_on_rv32_m8.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/rvv-intrinsic-rv32-imm32bit.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaaddu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaaddu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaaddu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vaaddu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadc.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadc.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vadd.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vand.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasubu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasubu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasubu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vasubu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdiv.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdiv.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdiv.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdiv.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdivu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdivu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdivu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vdivu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmacc.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmadd.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmax.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmax.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmax.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmax.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmaxu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmaxu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmaxu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmaxu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmerge.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmerge.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmerge.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmerge.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmin.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmin.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmin.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmin.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vminu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vminu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vminu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vminu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsbc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsbc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsbc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsbc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmseq.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmseq.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmseq.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmseq.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsge.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsge.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgeu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgeu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgt.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgt.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgtu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsgtu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsle.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsle.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsleu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsleu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmslt.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmslt.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsltu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsltu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsne.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsne.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsne.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmsne.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmul.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulh.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulh.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulh.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulh.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhsu.vx_iu_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhsu.vx_iu_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhsu.vx_iu_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhsu.vx_iu_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmulhu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.s.x_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.s.x_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.s.x_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.s.x_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.v.x_i_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.v.x_i_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.v.x_u_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vmv.v.x_u_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsac.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vnmsub.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vor.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrem.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrem.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrem.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrem.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vremu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vremu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vremu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vremu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vrsub.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsaddu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsaddu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsaddu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsaddu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsbc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsbc.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsbc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsbc.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_i_mask_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_i_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_i_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_i_txmx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_u_mask_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_u_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_u_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1down.vx_u_txmx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_i_mask_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_i_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_i_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_i_txmx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_u_mask_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_u_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_u_tx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vslide1up.vx_u_txmx_sew64_on_rv32_e64.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsmul.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsmul.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsmul.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsmul.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssubu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssubu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssubu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vssubu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vsub.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32-imm32bit/vxor.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/rvv-intrinsic-rv32.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaadd.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vaaddu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadc.vxm_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vadd.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vand.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasub.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vasubu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdiv.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vdivu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmacc.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadc.vxm_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmadd.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmax.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmaxu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmerge.vxm_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmin.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vminu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsbc.vxm_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmseq.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsge.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgeu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgt.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsgtu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsle.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsleu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmslt.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsltu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmsne.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmul.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulh.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhsu.vx_iu_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vmulhu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsac.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vnmsub.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vor.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrem.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vremu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_mask_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_mask_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_mask_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_tx_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_tx_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_tx_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_txmx_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_txmx_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_i_txmx_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_mask_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_mask_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_mask_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_tx_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_tx_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_tx_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_txmx_pseudo_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_txmx_pseudo_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vrsub.vx_u_txmx_pseudo_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsadd.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsaddu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsbc.vxm_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsmul.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssub.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vssubu.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vsub.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_i_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_mask_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_mask_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_mask_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_tx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_tx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_tx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_txmx_sew64_on_rv32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_txmx_sew64_on_rv32_vl31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic-rv32/vxor.vx_u_txmx_sew64_on_rv32_vl32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/mask_load_store.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/mask_load_store_31.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/mask_load_store_32.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/misc_func.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/set-get.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vaadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vaaddu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vadc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vand.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vasub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vasubu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vcompress.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vcpop.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vdiv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vdivu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfabs.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfclass.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfcvt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfdiv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfirst.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmax.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmerge.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmin.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmsac.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfmv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfncvt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfneg.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfnmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfnmadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfnmsac.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfnmsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfrdiv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfrec7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfredmax.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfredmin.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfredosum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfredusum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfrsqrt7.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfrsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfsgnj.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfsgnjn.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfsgnjx.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfslide1down.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfslide1up.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfsqrt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwcvt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwmsac.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwnmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwnmsac.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwredosum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwredusum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vfwsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vid.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/viota.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlex_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlex_2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlexff_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlexff_2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vloxeix_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vloxeix_2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vloxeix_3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vloxeix_4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlsex_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vlsex_2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vluxeix_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vluxeix_2.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vluxeix_3.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vluxeix_4.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmadc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmand.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmandn.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmax.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmaxu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmclr.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmerge.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmfeq.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmfge.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmfgt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmfle.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmflt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmfne.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmin.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vminu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmmv.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmnand.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmnor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmnot.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmorn.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsbc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsbf.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmseq.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmset.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsge.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsgeu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsgt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsgtu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsif.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsle.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsleu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmslt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsltu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsne.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmsof.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmulh.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmulhsu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmulhu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmv1r.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmv2r.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmv4r.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmv8r.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmxnor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vmxor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnclip.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnclipu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vncvt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vneg.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnmsac.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnmsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnot.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnsra.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vnsrl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredand.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredmax.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredmaxu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredmin.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredminu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredsum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vredxor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vrem.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vremu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vrgather.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vrgatherei16.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vrsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsaddu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsbc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsetvl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsex.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsext.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vslide1down.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vslide1up.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vslidedown.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vslideup.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsll.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsoxeix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsra.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsrl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vssex.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vssra.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vssrl.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vssub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vssubu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsuxeix.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwadd.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwaddu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwcvt.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwcvtu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmacc.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmaccsu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmaccu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmaccus.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmul.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmulsu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwmulu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwredsum.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwredsumu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwsub.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vwsubu.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vxor.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vzext.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/rvv-stack.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_2.c

-- 
2.36.1

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 01/21] Add RVV modes and support scalable vector
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 02/21] Add RVV intrinsic framework juzhe.zhong
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config.gcc: Add riscv-vector.o to extra_objs for RVV support.
        * config/riscv/constraints.md: New constraints.
        * config/riscv/predicates.md: New predicates.
        * config/riscv/riscv-modes.def: New machine modes.
        * config/riscv/riscv-opts.h: New enum.
        * config/riscv/riscv-protos.h: New function declarations.
        * config/riscv/riscv-sr.cc (riscv_remove_unneeded_save_restore_calls): Adjust for poly_int.
        * config/riscv/riscv.cc (struct riscv_frame_info): Change HOST_WIDE_INT to poly_int64.
        (poly_uint16 riscv_vector_chunks): New declaration.
        (riscv_legitimate_constant_p): Adjust for poly_int.
        (riscv_cannot_force_const_mem): Adjust for poly_int.
        (riscv_valid_offset_p): Adjust for poly_int.
        (riscv_valid_lo_sum_p): Adjust for poly_int.
        (riscv_classify_address): Disallow PLUS, LO_SUM and CONST_INT memory address for RVV.
        (riscv_address_insns): Adjust for poly_int.
        (riscv_const_insns): Adjust for poly_int.
        (riscv_load_store_insns): Adjust for poly_int.
        (riscv_legitimize_move): Adjust for poly_int.
        (riscv_binary_cost): Adjust for poly_int.
        (riscv_rtx_costs): Adjust for poly_int.
        (riscv_output_move): Adjust for poly_int.
        (riscv_extend_comparands): Adjust for poly_int.
        (riscv_flatten_aggregate_field): Adjust for poly_int.
        (riscv_get_arg_info): Adjust for poly_int.
        (riscv_pass_by_reference): Adjust for poly_int.
        (riscv_elf_select_rtx_section): Adjust for poly_int.
        (riscv_stack_align): Adjust for poly_int.
        (riscv_compute_frame_info): Adjust for poly_int.
        (riscv_initial_elimination_offset): Change HOST_WIDE_INT to poly_int64.
        (riscv_set_return_address): Adjust for poly_int.
        (riscv_for_each_saved_reg): Adjust for poly_int.
        (riscv_first_stack_step): Adjust for poly_int.
        (riscv_expand_prologue): Adjust for poly_int.
        (riscv_expand_epilogue): Adjust for poly_int.
        (riscv_can_use_return_insn): Adjust for poly_int.
        (riscv_secondary_memory_needed): Disable secondary memory for RVV.
        (riscv_hard_regno_nregs): Add RVV register allocation.
        (riscv_hard_regno_mode_ok): Add RVV register allocation.
        (riscv_convert_riscv_vector_bits): New function.
        (riscv_option_override): Add RVV vector bits parser.
        (riscv_promote_function_mode): Adjust for RVV modes.
        * config/riscv/riscv.h: New macro definitions.
        * config/riscv/riscv.md: Adjust for poly_int.
        * config/riscv/riscv.opt: New option.
        * config/riscv/t-riscv: New object.
        * config/riscv/riscv-vector.cc: New file.
        * config/riscv/riscv-vector.h: New file.
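
For illustration, here is a minimal user-level sketch of how the new mask
modes surface at the source level.  It assumes the rvv-intrinsic-doc naming
used in this series and the riscv_vector.h header added by the later
patches, so it is a reference for readers rather than part of this patch:

#include <riscv_vector.h>

vbool64_t
cmp (vint64m1_t a, vint64m1_t b, size_t vl)
{
  /* An SEW=64, LMUL=1 element compare yields a mask with SEW/LMUL = 64,
     which this patch models as VNx2BImode (see riscv-modes.def).  */
  return vmseq_vv_i64m1_b64 (a, b, vl);
}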
        
---
 gcc/config.gcc                   |   2 +-
 gcc/config/riscv/constraints.md  |  17 ++
 gcc/config/riscv/predicates.md   |   5 +-
 gcc/config/riscv/riscv-modes.def | 177 ++++++++++++++++++
 gcc/config/riscv/riscv-opts.h    |  27 +++
 gcc/config/riscv/riscv-protos.h  |   9 +-
 gcc/config/riscv/riscv-sr.cc     |   2 +-
 gcc/config/riscv/riscv-vector.cc | 229 +++++++++++++++++++++++
 gcc/config/riscv/riscv-vector.h  |  28 +++
 gcc/config/riscv/riscv.cc        | 302 +++++++++++++++++++++++--------
 gcc/config/riscv/riscv.h         |  84 +++++++--
 gcc/config/riscv/riscv.md        |  36 ++--
 gcc/config/riscv/riscv.opt       |  32 ++++
 gcc/config/riscv/t-riscv         |   4 +
 14 files changed, 849 insertions(+), 105 deletions(-)
 create mode 100644 gcc/config/riscv/riscv-vector.cc
 create mode 100644 gcc/config/riscv/riscv-vector.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index cdbefb5b4f5..50154c2eb3a 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -517,7 +517,7 @@ pru-*-*)
 	;;
 riscv*)
 	cpu_type=riscv
-	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o"
+	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o"
 	d_target_objs="riscv-d.o"
 	;;
 rs6000*-*-*)
diff --git a/gcc/config/riscv/constraints.md b/gcc/config/riscv/constraints.md
index bafa4188ccb..7fd61a04216 100644
--- a/gcc/config/riscv/constraints.md
+++ b/gcc/config/riscv/constraints.md
@@ -80,3 +80,20 @@
    A constant @code{move_operand}."
   (and (match_operand 0 "move_operand")
        (match_test "CONSTANT_P (op)")))
+
+;; Vector constraints.
+
+(define_register_constraint "vr" "TARGET_VECTOR ? V_REGS : NO_REGS"
+  "A vector register (if available).")
+
+(define_register_constraint "vd" "TARGET_VECTOR ? VD_REGS : NO_REGS"
+  "A vector register except mask register (if available).")
+
+(define_register_constraint "vm" "TARGET_VECTOR ? VM_REGS : NO_REGS"
+  "A vector mask register (if available).")
+
+(define_constraint "vp"
+  "POLY_INT"
+  (and (match_code "const_poly_int")
+       (match_test "CONST_POLY_INT_COEFFS (op)[0] == UNITS_PER_V_REG.coeffs[0]
+        && CONST_POLY_INT_COEFFS (op)[1] == UNITS_PER_V_REG.coeffs[1]")))
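
A side note on exercising these constraints: GCC accepts machine-specific
constraints in inline asm, so once the vector types from the later patches
in this series are available, the new "vr" constraint could in principle be
used directly.  A hypothetical sketch (assuming riscv_vector.h and that
"vr" is exposed to user asm); this is illustrative only and not part of
the patch:

#include <riscv_vector.h>

vint32m1_t
double_it (vint32m1_t x, size_t vl)
{
  vint32m1_t y;
  /* "vr" lets the compiler pick any vector register; vl is programmed
     explicitly because user asm bypasses the vsetvl insertion pass.  */
  asm ("vsetvli zero, %2, e32, m1, ta, mu\n\t"
       "vadd.vv %0, %1, %1"
       : "=vr" (y)
       : "vr" (x), "r" (vl));
  return y;
}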
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index c37caa2502b..6328cfff367 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -71,7 +71,7 @@
 {
   /* Don't handle multi-word moves this way; we don't want to introduce
      the individual word-mode moves until after reload.  */
-  if (GET_MODE_SIZE (mode) > UNITS_PER_WORD)
+  if (GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD)
     return false;
 
   /* Check whether the constant can be loaded in a single
@@ -145,6 +145,9 @@
     {
     case CONST_INT:
       return !splittable_const_int_operand (op, mode);
+
+    case CONST_POLY_INT:
+      return rvv_legitimate_poly_int_p (op);
 
     case CONST:
     case SYMBOL_REF:
diff --git a/gcc/config/riscv/riscv-modes.def b/gcc/config/riscv/riscv-modes.def
index 653228409a4..88cf9551727 100644
--- a/gcc/config/riscv/riscv-modes.def
+++ b/gcc/config/riscv/riscv-modes.def
@@ -20,3 +20,180 @@ along with GCC; see the file COPYING3.  If not see
 <http://www.gnu.org/licenses/>.  */
 
 FLOAT_MODE (TF, 16, ieee_quad_format);
+
+/* Vector modes.  */
+
+/* Encode the ratio of SEW/LMUL into the mask types.  The following mask types are defined.  */
+
+/* | Type      | Mode     | SEW/LMUL |
+   | vbool64_t | VNx2BI   | 64       |
+   | vbool32_t | VNx4BI   | 32       |
+   | vbool16_t | VNx8BI   | 16       |
+   | vbool8_t  | VNx16BI  | 8        |
+   | vbool4_t  | VNx32BI  | 4        |
+   | vbool2_t  | VNx64BI  | 2        |
+   | vbool1_t  | VNx128BI | 1        |  */
+
+VECTOR_BOOL_MODE (VNx2BI, 2, BI, 16);
+VECTOR_BOOL_MODE (VNx4BI, 4, BI, 16);
+VECTOR_BOOL_MODE (VNx8BI, 8, BI, 16);
+VECTOR_BOOL_MODE (VNx16BI, 16, BI, 16);
+VECTOR_BOOL_MODE (VNx32BI, 32, BI, 16);
+VECTOR_BOOL_MODE (VNx64BI, 64, BI, 16);
+VECTOR_BOOL_MODE (VNx128BI, 128, BI, 16);
+
+ADJUST_NUNITS (VNx2BI, riscv_vector_chunks * 1);
+ADJUST_NUNITS (VNx4BI, riscv_vector_chunks * 2);
+ADJUST_NUNITS (VNx8BI, riscv_vector_chunks * 4);
+ADJUST_NUNITS (VNx16BI, riscv_vector_chunks * 8);
+ADJUST_NUNITS (VNx32BI, riscv_vector_chunks * 16);
+ADJUST_NUNITS (VNx64BI, riscv_vector_chunks * 32);
+ADJUST_NUNITS (VNx128BI, riscv_vector_chunks * 64);
+
+ADJUST_ALIGNMENT (VNx2BI, 1);
+ADJUST_ALIGNMENT (VNx4BI, 1);
+ADJUST_ALIGNMENT (VNx8BI, 1);
+ADJUST_ALIGNMENT (VNx16BI, 1);
+ADJUST_ALIGNMENT (VNx32BI, 1);
+ADJUST_ALIGNMENT (VNx64BI, 1);
+ADJUST_ALIGNMENT (VNx128BI, 1);
+
+ADJUST_BYTESIZE (VNx2BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx4BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx8BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx16BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx32BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx64BI, riscv_vector_chunks * 8);
+ADJUST_BYTESIZE (VNx128BI, riscv_vector_chunks * 8);
+
+/* Define RVV modes for NVECS vectors.  VB, VH, VS and VD are the prefixes
+   for 8-bit, 16-bit, 32-bit and 64-bit elements respectively.  It isn't
+   strictly necessary to set the alignment here, since the default would
+   be clamped to BIGGEST_ALIGNMENT anyhow, but it seems clearer.  */
+
+/* TODO: Since there is no 'Zfh' (half-precision float) support in upstream
+   GCC yet, vector modes with 16-bit floating-point elements are deferred
+   to a follow-up patch once 'Zfh' is supported upstream.  */
+
+/* | Type                    | Mode     | SEW/LMUL |
+   | vint8m1_t/vuint8m1_t    | VNx16QI  | 8        |
+   | vint8m2_t/vuint8m2_t    | VNx32QI  | 4        |
+   | vint8m4_t/vuint8m4_t    | VNx64QI  | 2        |
+   | vint8m8_t/vuint8m8_t    | VNx128QI | 1        |
+   | vint16m1_t/vuint16m1_t  | VNx8HI   | 16       |
+   | vint16m2_t/vuint16m2_t  | VNx16HI  | 8        |
+   | vint16m4_t/vuint16m4_t  | VNx32HI  | 4        |
+   | vint16m8_t/vuint16m8_t  | VNx64HI  | 2        |
+   | vint32m1_t/vuint32m1_t  | VNx4SI   | 32       |
+   | vint32m2_t/vuint32m2_t  | VNx8SI   | 16       |
+   | vint32m4_t/vuint32m4_t  | VNx16SI  | 8        |
+   | vint32m8_t/vuint32m8_t  | VNx32SI  | 4        |
+   | vint64m1_t/vuint64m1_t  | VNx2DI   | 64       |
+   | vint64m2_t/vuint64m2_t  | VNx4DI   | 32       |
+   | vint64m4_t/vuint64m4_t  | VNx8DI   | 16       |
+   | vint64m8_t/vuint64m8_t  | VNx16DI  | 8        |
+   | vfloat32m1_t            | VNx4SF   | 32       |
+   | vfloat32m2_t            | VNx8SF   | 16       |
+   | vfloat32m4_t            | VNx16SF  | 8        |
+   | vfloat32m8_t            | VNx32SF  | 4        |
+   | vfloat64m1_t            | VNx2DF   | 64       |
+   | vfloat64m2_t            | VNx4DF   | 32       |
+   | vfloat64m4_t            | VNx8DF   | 16       |
+   | vfloat64m8_t            | VNx16DF  | 8        |  */
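+
+/* For example, if VLEN = 128 (purely illustrative), vint32m1_t (VNx4SI)
+   holds VLEN * LMUL / SEW = 128 * 1 / 32 = 4 elements, while vint32m8_t
+   (VNx32SI) holds 128 * 8 / 32 = 32 elements.  */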
+
+#define RVV_MODES(NVECS, VB, VH, VS, VD) \
+  VECTOR_MODES_WITH_PREFIX (VNx, INT, 16 * NVECS, 0); \
+  VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 16 * NVECS, 0); \
+  \
+  ADJUST_NUNITS (VB##QI, riscv_vector_chunks * NVECS * 8); \
+  ADJUST_NUNITS (VH##HI, riscv_vector_chunks * NVECS * 4); \
+  ADJUST_NUNITS (VS##SI, riscv_vector_chunks * NVECS * 2); \
+  ADJUST_NUNITS (VD##DI, riscv_vector_chunks * NVECS); \
+  ADJUST_NUNITS (VS##SF, riscv_vector_chunks * NVECS * 2); \
+  ADJUST_NUNITS (VD##DF, riscv_vector_chunks * NVECS); \
+  \
+  ADJUST_ALIGNMENT (VB##QI, 1); \
+  ADJUST_ALIGNMENT (VH##HI, 2); \
+  ADJUST_ALIGNMENT (VS##SI, 4); \
+  ADJUST_ALIGNMENT (VD##DI, 8); \
+  ADJUST_ALIGNMENT (VS##SF, 4); \
+  ADJUST_ALIGNMENT (VD##DF, 8);
+
+/* Give the vectors the names normally used for 128-bit vectors.
+   The actual number of units depends on command-line flags.  */
+RVV_MODES (1, VNx16, VNx8, VNx4, VNx2)
+RVV_MODES (2, VNx32, VNx16, VNx8, VNx4)
+RVV_MODES (4, VNx64, VNx32, VNx16, VNx8)
+RVV_MODES (8, VNx128, VNx64, VNx32, VNx16)
+
+/* Partial RVV vectors:
+
+      VNx8QI VNx4HI VNx2SI VNx2SF
+      VNx4QI VNx2HI
+      VNx2QI
+
+   In memory they occupy contiguous locations, in the same way as fixed-length
+   vectors.  E.g. VNx8QImode is half the size of VNx16QImode.
+
+   Passing 1 as the final argument ensures that the modes come after all
+   other modes in the GET_MODE_WIDER chain, so that we never pick them
+   in preference to a full vector mode.  */
+
+/* TODO: Since there is no 'Zfh' support in upstream GCC yet, partial vector
+   modes with 16-bit half-precision floating-point elements will be added
+   in a later patch once 'Zfh' is supported upstream.  */
+
+/* | Type                     | Mode    | SEW/LMUL |
+   | vint8mf2_t/vuint8mf2_t   | VNx8QI  | 16       |
+   | vint8mf4_t/vuint8mf4_t   | VNx4QI  | 32       |
+   | vint8mf8_t/vuint8mf8_t   | VNx2QI  | 64       |
+   | vint16mf2_t/vuint16mf2_t | VNx4HI  | 32       |
+   | vint16mf4_t/vuint16mf4_t | VNx2HI  | 64       |
+   | vint32mf2_t/vuint32mf2_t | VNx2SI  | 64       |
+   | vfloat32mf2_t            | VNx2SF  | 64       |  */
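+
+/* For example, if VLEN = 128 (purely illustrative), vint8mf2_t (VNx8QI)
+   holds 128 * (1/2) / 8 = 8 elements and occupies only half of one
+   vector register.  */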
+
+VECTOR_MODES_WITH_PREFIX (VNx, INT, 2, 1);
+VECTOR_MODES_WITH_PREFIX (VNx, INT, 4, 1);
+VECTOR_MODES_WITH_PREFIX (VNx, INT, 8, 1);
+VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 4, 1);
+VECTOR_MODES_WITH_PREFIX (VNx, FLOAT, 8, 1);
+
+ADJUST_NUNITS (VNx2QI, riscv_vector_chunks);
+ADJUST_NUNITS (VNx2HI, riscv_vector_chunks);
+ADJUST_NUNITS (VNx2SI, riscv_vector_chunks);
+ADJUST_NUNITS (VNx2SF, riscv_vector_chunks);
+
+ADJUST_NUNITS (VNx4QI, riscv_vector_chunks * 2);
+ADJUST_NUNITS (VNx4HI, riscv_vector_chunks * 2);
+
+ADJUST_NUNITS (VNx8QI, riscv_vector_chunks * 4);
+
+ADJUST_ALIGNMENT (VNx2QI, 1);
+ADJUST_ALIGNMENT (VNx4QI, 1);
+ADJUST_ALIGNMENT (VNx8QI, 1);
+
+ADJUST_ALIGNMENT (VNx2HI, 2);
+ADJUST_ALIGNMENT (VNx4HI, 2);
+
+ADJUST_ALIGNMENT (VNx2SI, 4);
+ADJUST_ALIGNMENT (VNx2SF, 4);
+
+/* TODO: Supporting segment instructions requires a new tuple mode
+   definition in gcc/genmodes.cc, which is outside the RISC-V port.
+   It will be supported in a later patch.  */
+
+/* An 8-tuple of RVV vectors with the maximum -mriscv-vector-bits= setting.
+   Note that this is a limit only on the compile-time sizes of modes;
+   it is not a limit on the runtime sizes, since VL-agnostic code
+   must work with arbitrary vector lengths.  */
+/* TODO: According to the RISC-V 'V' ISA spec, the maximum vector length of
+   a single vector register can be 65536 bits, which means the largest
+   vector mode in GCC can be 65536 * 8 bits (NF = 8).  However,
+   'GET_MODE_SIZE' uses poly_uint16/unsigned short, which would overflow
+   if we specified a vector length of 65536.  Supporting this requires
+   changes outside the RISC-V port; we will support it in another patch.  */
+#define MAX_BITSIZE_MODE_ANY_MODE (4096 * 8)
+
+/* Coefficient 1 is multiplied by the number of 64-bit chunks in a vector
+   minus one.  */
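+
+/* For example, with the default scalable setting riscv_vector_chunks is
+   the poly_uint16 (2, 2), so the byte size of an LMUL = 1 vector register
+   is (16, 16): 16 bytes when VLEN = 128, plus another 16 bytes for each
+   additional 128 bits of VLEN.  */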
+#define NUM_POLY_INT_COEFFS 2
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-opts.h b/gcc/config/riscv/riscv-opts.h
index 1e153b3a6e7..d99b8dcbaf1 100644
--- a/gcc/config/riscv/riscv-opts.h
+++ b/gcc/config/riscv/riscv-opts.h
@@ -67,6 +67,33 @@ enum stack_protector_guard {
   SSP_GLOBAL			/* global canary */
 };
 
+/* RVV vector register sizes.  */
+enum riscv_vector_bits_enum
+{
+  RVV_SCALABLE,
+  RVV_NOT_IMPLEMENTED = RVV_SCALABLE,
+  RVV_64 = 64,
+  RVV_128 = 128,
+  RVV_256 = 256,
+  RVV_512 = 512,
+  RVV_1024 = 1024,
+  RVV_2048 = 2048,
+  RVV_4096 = 4096
+};
+
+enum vlmul_field_enum
+{
+  VLMUL_FIELD_000, /* LMUL = 1 */
+  VLMUL_FIELD_001, /* LMUL = 2 */
+  VLMUL_FIELD_010, /* LMUL = 4 */
+  VLMUL_FIELD_011, /* LMUL = 8 */
+  VLMUL_FIELD_100, /* RESERVED */
+  VLMUL_FIELD_101, /* LMUL = 1/8 */
+  VLMUL_FIELD_110, /* LMUL = 1/4 */
+  VLMUL_FIELD_111, /* LMUL = 1/2 */
+  MAX_VLMUL_FIELD
+};
+
 #define MASK_ZICSR    (1 << 0)
 #define MASK_ZIFENCEI (1 << 1)
 
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 20c2381c21a..19c50f0e702 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -64,7 +64,7 @@ extern rtx riscv_legitimize_call_address (rtx);
 extern void riscv_set_return_address (rtx, rtx);
 extern bool riscv_expand_block_move (rtx, rtx, rtx);
 extern rtx riscv_return_addr (int, rtx);
-extern HOST_WIDE_INT riscv_initial_elimination_offset (int, int);
+extern poly_int64 riscv_initial_elimination_offset (int, int);
 extern void riscv_expand_prologue (void);
 extern void riscv_expand_epilogue (int);
 extern bool riscv_epilogue_uses (unsigned int);
@@ -109,4 +109,11 @@ struct riscv_cpu_info {
 
 extern const riscv_cpu_info *riscv_find_cpu (const char *);
 
+/* Routines implemented in riscv-vector.cc. */
+extern bool rvv_mode_p (machine_mode);
+extern bool rvv_legitimate_poly_int_p (rtx);
+extern unsigned int rvv_offset_temporaries (bool, poly_int64);
+extern enum vlmul_field_enum riscv_classify_vlmul_field (machine_mode);
+extern int rvv_regsize (machine_mode);
+
 #endif /* ! GCC_RISCV_PROTOS_H */
diff --git a/gcc/config/riscv/riscv-sr.cc b/gcc/config/riscv/riscv-sr.cc
index 694f90c1583..7248f04d68f 100644
--- a/gcc/config/riscv/riscv-sr.cc
+++ b/gcc/config/riscv/riscv-sr.cc
@@ -247,7 +247,7 @@ riscv_remove_unneeded_save_restore_calls (void)
   /* We'll adjust stack size after this optimization, that require update every
      sp use site, which could be unsafe, so we decide to turn off this
      optimization if there are any arguments put on stack.  */
-  if (crtl->args.size != 0)
+  if (known_ne (crtl->args.size, 0))
     return;
 
   /* Will point to the first instruction of the function body, after the
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
new file mode 100644
index 00000000000..e315b5d2cac
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -0,0 +1,229 @@
+/* Subroutines used for code generation for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#define IN_TARGET_CODE 1
+#define INCLUDE_STRING
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "backend.h"
+#include "rtl.h"
+#include "regs.h"
+#include "insn-config.h"
+#include "insn-attr.h"
+#include "recog.h"
+#include "output.h"
+#include "alias.h"
+#include "tree.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "varasm.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "function.h"
+#include "explow.h"
+#include "memmodel.h"
+#include "emit-rtl.h"
+#include "reload.h"
+#include "tm_p.h"
+#include "target.h"
+#include "basic-block.h"
+#include "expr.h"
+#include "optabs.h"
+#include "bitmap.h"
+#include "df.h"
+#include "diagnostic.h"
+#include "builtins.h"
+#include "predict.h"
+#include "tree-pass.h"
+#include "opts.h"
+#include "langhooks.h"
+#include "rtl-iter.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "cfgloop.h"
+#include "fold-const.h"
+#include "gimple-iterator.h"
+#include "tree-vectorizer.h"
+#include "tree-ssa-loop-niter.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector.h"
+/* This file should be included last.  */
+#include "target-def.h"
+
+/* Helper functions for RVV.  */
+
+/* Return true if MODE is an RVV mask mode.  */
+bool
+rvv_mask_mode_p (machine_mode mode)
+{
+  if (VECTOR_MODE_P (mode)
+      && GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL
+      && strncmp (GET_MODE_NAME (mode), "VNx", 3) == 0)
+    return true;
+
+  return false;
+}
+
+/* Return true if MODE is an RVV vector mode.  */
+bool
+rvv_vector_mode_p (machine_mode mode)
+{
+  if (VECTOR_MODE_P (mode)
+      && (GET_MODE_CLASS (mode) == MODE_VECTOR_INT
+	  || GET_MODE_CLASS (mode) == MODE_VECTOR_FLOAT)
+      && strncmp (GET_MODE_NAME (mode), "VNx", 3) == 0
+      /* So far we only support RVV modes with SEW <= 64.  */
+      && GET_MODE_BITSIZE (GET_MODE_INNER (mode)) <= 64)
+    return true;
+
+  return false;
+}
+
+/* Return true if MODE is an RVV mode.  */
+bool
+rvv_mode_p (machine_mode mode)
+{
+  return rvv_mask_mode_p (mode) || rvv_vector_mode_p (mode);
+}
+
+/* Return true if X is a const poly int whose value is the size of an
+   LMUL = 1 RVV vector register.  */
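+/* With the default scalable setting, riscv_vector_chunks is (2, 2) and
+   UNITS_PER_V_REG is (16, 16), so the only value accepted here is the
+   poly constant (16, 16).  */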
+bool
+rvv_legitimate_poly_int_p (rtx x)
+{
+  poly_int64 value = rtx_to_poly_int64 (x);
+
+  HOST_WIDE_INT factor = value.coeffs[0];
+  return (value.coeffs[1] == factor && factor == UNITS_PER_V_REG.coeffs[0]);
+}
+
+/* Return the number of temporary registers that riscv_add_offset_1
+   would need to add OFFSET to a register.  */
+
+static unsigned int
+rvv_add_offset_1_temporaries (HOST_WIDE_INT offset)
+{
+  return SMALL_OPERAND (offset) ? 0 : 1;
+}
+
+/* Return the number of temporary registers that riscv_add_offset
+   would need to move OFFSET into a register or add OFFSET to a register;
+   ADD_P is true if we want the latter rather than the former.  */
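+
+/* For example, with the default scalable setting, adding the offset
+   (16, 16) (the size of one LMUL = 1 vector register) needs no temporary,
+   since a csrr vlenb plus a single add suffices, whereas adding (32, 32)
+   needs two temporaries: one for the vlenb read and one for the
+   multiplication factor.  */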
+
+unsigned int
+rvv_offset_temporaries (bool add_p, poly_int64 offset)
+{
+  /* This follows the same structure as riscv_add_offset.  */
+  if (add_p && rvv_legitimate_poly_int_p (gen_int_mode (offset, Pmode)))
+    return 0;
+
+  unsigned int count = 0;
+  HOST_WIDE_INT factor = offset.coeffs[1];
+  HOST_WIDE_INT constant = offset.coeffs[0] - factor;
+  poly_int64 poly_offset (factor, factor);
+  if (add_p && rvv_legitimate_poly_int_p (gen_int_mode (poly_offset, Pmode)))
+    /* Need one register for the csrr vlenb result.  */
+    count += 1;
+  else if (factor != 0)
+    {
+      factor = abs (factor);
+      if (!rvv_legitimate_poly_int_p (gen_int_mode (poly_offset, Pmode)))
+        /* Need one register for the vlenb result and one for the
+           multiplication factor.  If necessary, the second temporary can be
+           reused for the constant part of the offset.  */
+        return 2;
+      /* Need one register for the vlenb result (which might then
+         be shifted).  */
+      count += 1;
+    }
+  return count + rvv_add_offset_1_temporaries (constant);
+}
+
+/* Return the vlmul field for a specific machine mode. */
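+/* For example, with riscv_vector_chunks = (2, 2), VNx8SI occupies
+   (32, 32) bytes, twice BYTES_PER_RISCV_VECTOR = (16, 16), and is
+   therefore classified as LMUL = 2 (VLMUL_FIELD_001).  */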
+
+enum vlmul_field_enum
+rvv_classify_vlmul_field (machine_mode mode)
+{
+  /* Case 1: LMUL = 1.  */
+  if (known_eq (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
+    return VLMUL_FIELD_000;
+
+  /* Case 2: LMUL = 2, 4 or 8.  */
+  if (known_gt (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
+    {
+      unsigned int factor = exact_div (GET_MODE_SIZE (mode),
+				       BYTES_PER_RISCV_VECTOR).to_constant ();
+      switch (factor)
+	{
+	case 2:
+	  return VLMUL_FIELD_001;
+	case 4:
+	  return VLMUL_FIELD_010;
+	case 8:
+	  return VLMUL_FIELD_011;
+	default:
+	  gcc_unreachable ();
+	}
+    }
+
+  /* Case 3: Fractional LMUL = 1/2, 1/4 or 1/8.  */
+  if (known_lt (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR))
+    {
+      unsigned int factor = exact_div (BYTES_PER_RISCV_VECTOR,
+				       GET_MODE_SIZE (mode)).to_constant ();
+      switch (factor)
+	{
+	case 2:
+	  return VLMUL_FIELD_111;
+	case 4:
+	  return VLMUL_FIELD_110;
+	case 8:
+	  return VLMUL_FIELD_101;
+	default:
+	  gcc_unreachable ();
+	}
+    }
+  gcc_unreachable ();
+}
+
+/* Return the number of vector registers (the LMUL register group size)
+   occupied by a machine mode.  */
+
+int
+rvv_regsize (machine_mode mode)
+{
+  if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+    return 1;
+
+  switch (rvv_classify_vlmul_field (mode))
+    {
+    case VLMUL_FIELD_001:
+      return 2;
+    case VLMUL_FIELD_010:
+      return 4;
+    case VLMUL_FIELD_011:
+      return 8;
+    default:
+      break;
+    }
+
+  return 1;
+}
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
new file mode 100644
index 00000000000..b8d77ddb195
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector.h
@@ -0,0 +1,28 @@
+/* Definition of RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RISCV_VECTOR_H
+#define GCC_RISCV_VECTOR_H
+bool rvv_vector_mode_p (machine_mode);
+bool rvv_legitimate_poly_int_p (rtx);
+unsigned int rvv_offset_temporaries (bool, poly_int64);
+vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
+int rvv_regsize (machine_mode);
+#endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index f83dc796d88..37d8f1271d4 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -57,6 +57,16 @@ along with GCC; see the file COPYING3.  If not see
 #include "predict.h"
 #include "tree-pass.h"
 #include "opts.h"
+#include "langhooks.h"
+#include "rtl-iter.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "cfgloop.h"
+#include "fold-const.h"
+#include "gimple-iterator.h"
+#include "tree-vectorizer.h"
+#include "tree-ssa-loop-niter.h"
+#include "rtx-vector-builder.h"
 
 /* True if X is an UNSPEC wrapper around a SYMBOL_REF or LABEL_REF.  */
 #define UNSPEC_ADDRESS_P(X)					\
@@ -100,7 +110,7 @@ enum riscv_address_type {
 /* Information about a function's frame layout.  */
 struct GTY(())  riscv_frame_info {
   /* The size of the frame in bytes.  */
-  HOST_WIDE_INT total_size;
+  poly_int64 total_size;
 
   /* Bit X is set if the function saves or restores GPR X.  */
   unsigned int mask;
@@ -112,17 +122,20 @@ struct GTY(())  riscv_frame_info {
   unsigned save_libcall_adjustment;
 
   /* Offsets of fixed-point and floating-point save areas from frame bottom */
-  HOST_WIDE_INT gp_sp_offset;
-  HOST_WIDE_INT fp_sp_offset;
+  poly_int64 gp_sp_offset;
+  poly_int64 fp_sp_offset;
+
+  /* Constant offset of a scalable frame.  */
+  HOST_WIDE_INT constant_offset;
 
   /* Offset of virtual frame pointer from stack pointer/frame bottom */
-  HOST_WIDE_INT frame_pointer_offset;
+  poly_int64 frame_pointer_offset;
 
   /* Offset of hard frame pointer from stack pointer/frame bottom */
-  HOST_WIDE_INT hard_frame_pointer_offset;
+  poly_int64 hard_frame_pointer_offset;
 
   /* The offset of arg_pointer_rtx from the bottom of the frame.  */
-  HOST_WIDE_INT arg_pointer_offset;
+  poly_int64 arg_pointer_offset;
 };
 
 enum riscv_privilege_levels {
@@ -255,6 +268,9 @@ static const struct riscv_tune_param *tune_param;
 /* Which automaton to use for tuning.  */
 enum riscv_microarchitecture_type riscv_microarchitecture;
 
+/* The number of 64-bit elements in an RVV vector.  */
+poly_uint16 riscv_vector_chunks;
+
 /* Index R is the smallest register class that contains register R.  */
 const enum reg_class riscv_regno_to_class[FIRST_PSEUDO_REGISTER] = {
   GR_REGS,	GR_REGS,	GR_REGS,	GR_REGS,
@@ -273,7 +289,22 @@ const enum reg_class riscv_regno_to_class[FIRST_PSEUDO_REGISTER] = {
   FP_REGS,	FP_REGS,	FP_REGS,	FP_REGS,
   FP_REGS,	FP_REGS,	FP_REGS,	FP_REGS,
   FP_REGS,	FP_REGS,	FP_REGS,	FP_REGS,
-  FRAME_REGS,	FRAME_REGS,
+  FRAME_REGS,	FRAME_REGS, VL_REGS,	VTYPE_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  NO_REGS,	NO_REGS,	NO_REGS,	NO_REGS,
+  VM_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
+  VD_REGS,	VD_REGS,	VD_REGS,	VD_REGS,
 };
 
 /* Costs to use when optimizing for rocket.  */
@@ -713,6 +744,16 @@ static int riscv_symbol_insns (enum riscv_symbol_type type)
 static bool
 riscv_legitimate_constant_p (machine_mode mode ATTRIBUTE_UNUSED, rtx x)
 {
+  /* If an offset is being added to something else, we need to allow the
+     base to be moved into the destination register, meaning that there
+     are no free temporaries for the offset.  */
+  poly_int64 offset;
+  if (CONST_POLY_INT_P (x)
+      && poly_int_rtx_p (x, &offset)
+      && !offset.is_constant ()
+      && rvv_offset_temporaries (true, offset) > 0)
+    return false;
+
   return riscv_const_insns (x) > 0;
 }
 
@@ -723,7 +764,13 @@ riscv_cannot_force_const_mem (machine_mode mode ATTRIBUTE_UNUSED, rtx x)
 {
   enum riscv_symbol_type type;
   rtx base, offset;
-
+
+  /* There's no way to calculate VL-based values using relocations.  */
+  subrtx_iterator::array_type array;
+  FOR_EACH_SUBRTX (iter, array, x, ALL)
+    if (GET_CODE (*iter) == CONST_POLY_INT)
+      return true;
+
   /* There is no assembler syntax for expressing an address-sized
      high part.  */
   if (GET_CODE (x) == HIGH)
@@ -798,8 +845,8 @@ riscv_valid_offset_p (rtx x, machine_mode mode)
 
   /* We may need to split multiword moves, so make sure that every word
      is accessible.  */
-  if (GET_MODE_SIZE (mode) > UNITS_PER_WORD
-      && !SMALL_OPERAND (INTVAL (x) + GET_MODE_SIZE (mode) - UNITS_PER_WORD))
+  if (GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD
+      && !SMALL_OPERAND (INTVAL (x) + GET_MODE_SIZE (mode).to_constant () - UNITS_PER_WORD))
     return false;
 
   return true;
@@ -863,7 +910,7 @@ riscv_valid_lo_sum_p (enum riscv_symbol_type sym_type, machine_mode mode,
   else
     {
       align = GET_MODE_ALIGNMENT (mode);
-      size = GET_MODE_BITSIZE (mode);
+      size = GET_MODE_BITSIZE (mode).to_constant ();
     }
 
   /* We may need to split multiword moves, so make sure that each word
@@ -893,6 +940,9 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
       return riscv_valid_base_register_p (info->reg, mode, strict_p);
 
     case PLUS:
+      /* RVV load/store disallow any offset.  */
+      if (rvv_mode_p (mode))
+        return false;
       info->type = ADDRESS_REG;
       info->reg = XEXP (x, 0);
       info->offset = XEXP (x, 1);
@@ -900,6 +950,9 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 	      && riscv_valid_offset_p (info->offset, mode));
 
     case LO_SUM:
+      /* RVV load/store disallow LO_SUM.  */
+      if (rvv_mode_p (mode))
+        return false;
       info->type = ADDRESS_LO_SUM;
       info->reg = XEXP (x, 0);
       info->offset = XEXP (x, 1);
@@ -918,6 +971,9 @@ riscv_classify_address (struct riscv_address_info *info, rtx x,
 	      && riscv_valid_lo_sum_p (info->symbol_type, mode, info->offset));
 
     case CONST_INT:
+      /* RVV load/store disallow CONST_INT.  */
+      if (rvv_mode_p (mode))
+        return false;
       /* Small-integer addresses don't occur very often, but they
 	 are legitimate if x0 is a valid base register.  */
       info->type = ADDRESS_CONST_INT;
@@ -1003,8 +1059,8 @@ riscv_address_insns (rtx x, machine_mode mode, bool might_split_p)
 
   /* BLKmode is used for single unaligned loads and stores and should
      not count as a multiword mode. */
-  if (mode != BLKmode && might_split_p)
-    n += (GET_MODE_SIZE (mode) + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
+  if (mode != BLKmode && might_split_p && !rvv_mode_p (mode))
+    n += (GET_MODE_SIZE (mode).to_constant () + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
 
   if (addr.type == ADDRESS_LO_SUM)
     n += riscv_symbol_insns (addr.symbol_type) - 1;
@@ -1061,7 +1117,13 @@ riscv_const_insns (rtx x)
     case SYMBOL_REF:
     case LABEL_REF:
       return riscv_symbol_insns (riscv_classify_symbol (x));
-
+
+    /* TODO: In RVV, a CONST_POLY_INT is materialized with a csrr vlenb
+       instruction plus several scalar shift or multiply instructions, so
+       its exact cost is not yet known.  Use 4 for now.  */
+    case CONST_POLY_INT:
+      return 4;
+
     default:
       return 0;
     }
@@ -1097,9 +1159,9 @@ riscv_load_store_insns (rtx mem, rtx_insn *insn)
 
   /* Try to prove that INSN does not need to be split.  */
   might_split_p = true;
-  if (GET_MODE_BITSIZE (mode) <= 32)
+  if (GET_MODE_BITSIZE (mode).to_constant () <= 32)
     might_split_p = false;
-  else if (GET_MODE_BITSIZE (mode) == 64)
+  else if (GET_MODE_BITSIZE (mode).to_constant () == 64)
     {
       set = single_set (insn);
       if (set && !riscv_split_64bit_move_p (SET_DEST (set), SET_SRC (set)))
@@ -1616,7 +1678,7 @@ riscv_legitimize_move (machine_mode mode, rtx dest, rtx src)
        (set (reg:QI target) (subreg:QI (reg:DI temp) 0))
      with auto-sign/zero extend.  */
   if (GET_MODE_CLASS (mode) == MODE_INT
-      && GET_MODE_SIZE (mode) < UNITS_PER_WORD
+      && GET_MODE_SIZE (mode).to_constant () < UNITS_PER_WORD
       && can_create_pseudo_p ()
       && MEM_P (src))
     {
@@ -1641,7 +1703,7 @@ riscv_legitimize_move (machine_mode mode, rtx dest, rtx src)
 	     improve cse.  */
 	  machine_mode promoted_mode = mode;
 	  if (GET_MODE_CLASS (mode) == MODE_INT
-	      && GET_MODE_SIZE (mode) < UNITS_PER_WORD)
+	      && GET_MODE_SIZE (mode).to_constant () < UNITS_PER_WORD)
 	    promoted_mode = word_mode;
 
 	  if (splittable_const_int_operand (src, mode))
@@ -1739,7 +1801,8 @@ riscv_immediate_operand_p (int code, HOST_WIDE_INT x)
 static int
 riscv_binary_cost (rtx x, int single_insns, int double_insns)
 {
-  if (GET_MODE_SIZE (GET_MODE (x)) == UNITS_PER_WORD * 2)
+  if (!rvv_mode_p (GET_MODE (x))
+      && GET_MODE_SIZE (GET_MODE (x)).to_constant () == UNITS_PER_WORD * 2)
     return COSTS_N_INSNS (double_insns);
   return COSTS_N_INSNS (single_insns);
 }
@@ -1786,6 +1849,14 @@ static bool
 riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UNUSED,
 		 int *total, bool speed)
 {
+  /* TODO: We set the cost of each RVV instruction to 1 by default.
+     The cost model needs to be analyzed and tuned in the future.  */
+  if (rvv_mode_p (mode))
+    {
+      *total = COSTS_N_INSNS (1);
+      return true;
+    }
+
   bool float_mode_p = FLOAT_MODE_P (mode);
   int cost;
 
@@ -1845,7 +1916,7 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN
       return false;
 
     case NOT:
-      *total = COSTS_N_INSNS (GET_MODE_SIZE (mode) > UNITS_PER_WORD ? 2 : 1);
+      *total = COSTS_N_INSNS (GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD ? 2 : 1);
       return false;
 
     case AND:
@@ -2092,7 +2163,7 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN
       if (float_mode_p)
 	*total = tune_param->fp_add[mode == DFmode];
       else
-	*total = COSTS_N_INSNS (GET_MODE_SIZE (mode) > UNITS_PER_WORD ? 4 : 1);
+	*total = COSTS_N_INSNS (GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD ? 4 : 1);
       return false;
 
     case MULT:
@@ -2101,7 +2172,7 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN
       else if (!TARGET_MUL)
 	/* Estimate the cost of a library call.  */
 	*total = COSTS_N_INSNS (speed ? 32 : 6);
-      else if (GET_MODE_SIZE (mode) > UNITS_PER_WORD)
+      else if (GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD)
 	*total = 3 * tune_param->int_mul[0] + COSTS_N_INSNS (2);
       else if (!speed)
 	*total = COSTS_N_INSNS (1);
@@ -2264,7 +2335,7 @@ riscv_output_move (rtx dest, rtx src)
   dest_code = GET_CODE (dest);
   src_code = GET_CODE (src);
   mode = GET_MODE (dest);
-  dbl_p = (GET_MODE_SIZE (mode) == 8);
+  dbl_p = (GET_MODE_SIZE (mode).to_constant () == 8);
 
   if (dbl_p && riscv_split_64bit_move_p (dest, src))
     return "#";
@@ -2275,7 +2346,7 @@ riscv_output_move (rtx dest, rtx src)
 	return dbl_p ? "fmv.x.d\t%0,%1" : "fmv.x.w\t%0,%1";
 
       if (src_code == MEM)
-	switch (GET_MODE_SIZE (mode))
+	switch (GET_MODE_SIZE (mode).to_constant ())
 	  {
 	  case 1: return "lbu\t%0,%1";
 	  case 2: return "lhu\t%0,%1";
@@ -2328,7 +2399,7 @@ riscv_output_move (rtx dest, rtx src)
 	    }
 	}
       if (dest_code == MEM)
-	switch (GET_MODE_SIZE (mode))
+	switch (GET_MODE_SIZE (mode).to_constant ())
 	  {
 	  case 1: return "sb\t%z1,%0";
 	  case 2: return "sh\t%z1,%0";
@@ -2349,6 +2420,17 @@ riscv_output_move (rtx dest, rtx src)
       if (src_code == MEM)
 	return dbl_p ? "fld\t%0,%1" : "flw\t%0,%1";
     }
+  if (dest_code == REG
+      && GP_REG_P (REGNO (dest))
+      && src_code == CONST_POLY_INT)
+    {
+      /* We only allow a single vlenb read (the byte size of one LMUL = 1
+	 RVV vector register) after reload.  */
+      poly_int64 value = rtx_to_poly_int64 (src);
+      gcc_assert (value.coeffs[0] == UNITS_PER_V_REG.coeffs[0]
+		  && value.coeffs[1] == UNITS_PER_V_REG.coeffs[1]);
+      return "csrr\t%0,vlenb";
+    }
   gcc_unreachable ();
 }
 
@@ -2495,7 +2577,7 @@ static void
 riscv_extend_comparands (rtx_code code, rtx *op0, rtx *op1)
 {
   /* Comparisons consider all XLEN bits, so extend sub-XLEN values.  */
-  if (GET_MODE_SIZE (word_mode) > GET_MODE_SIZE (GET_MODE (*op0)))
+  if (GET_MODE_SIZE (word_mode) > GET_MODE_SIZE (GET_MODE (*op0)).to_constant ())
     {
       /* It is more profitable to zero-extend QImode values.  But not if the
 	 first operand has already been sign-extended, and the second one is
@@ -2854,7 +2936,7 @@ riscv_flatten_aggregate_field (const_tree type,
 	if (n != 0)
 	  return -1;
 
-	HOST_WIDE_INT elt_size = GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (type)));
+	HOST_WIDE_INT elt_size = GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (type))).to_constant ();
 
 	if (elt_size <= UNITS_PER_FP_ARG)
 	  {
@@ -2872,9 +2954,9 @@ riscv_flatten_aggregate_field (const_tree type,
     default:
       if (n < 2
 	  && ((SCALAR_FLOAT_TYPE_P (type)
-	       && GET_MODE_SIZE (TYPE_MODE (type)) <= UNITS_PER_FP_ARG)
+	       && GET_MODE_SIZE (TYPE_MODE (type)).to_constant () <= UNITS_PER_FP_ARG)
 	      || (INTEGRAL_TYPE_P (type)
-		  && GET_MODE_SIZE (TYPE_MODE (type)) <= UNITS_PER_WORD)))
+		  && GET_MODE_SIZE (TYPE_MODE (type)).to_constant () <= UNITS_PER_WORD)))
 	{
 	  fields[n].type = type;
 	  fields[n].offset = offset;
@@ -3110,7 +3192,7 @@ riscv_get_arg_info (struct riscv_arg_info *info, const CUMULATIVE_ARGS *cum,
     }
 
   /* Work out the size of the argument.  */
-  num_bytes = type ? int_size_in_bytes (type) : GET_MODE_SIZE (mode);
+  num_bytes = type ? int_size_in_bytes (type) : GET_MODE_SIZE (mode).to_constant ();
   num_words = (num_bytes + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
 
   /* Doubleword-aligned varargs start on an even register boundary.  */
@@ -3204,7 +3286,7 @@ riscv_function_value (const_tree type, const_tree func, machine_mode mode)
 static bool
 riscv_pass_by_reference (cumulative_args_t cum_v, const function_arg_info &arg)
 {
-  HOST_WIDE_INT size = arg.type_size_in_bytes ();
+  HOST_WIDE_INT size = arg.type_size_in_bytes ().to_constant ();
   struct riscv_arg_info info;
   CUMULATIVE_ARGS *cum = get_cumulative_args (cum_v);
 
@@ -3833,7 +3915,7 @@ riscv_elf_select_rtx_section (machine_mode mode, rtx x,
 {
   section *s = default_elf_select_rtx_section (mode, x, align);
 
-  if (riscv_size_ok_for_small_data_p (GET_MODE_SIZE (mode)))
+  if (riscv_size_ok_for_small_data_p (GET_MODE_SIZE (mode).to_constant ()))
     {
       if (startswith (s->named.name, ".rodata.cst"))
 	{
@@ -3941,6 +4023,19 @@ riscv_save_libcall_count (unsigned mask)
   abort ();
 }
 
+/* Handle stack align for poly_int.  */
+static poly_int64
+riscv_stack_align (poly_int64 value)
+{
+  return aligned_upper_bound (value, PREFERRED_STACK_BOUNDARY / 8);
+}
+
+static HOST_WIDE_INT
+riscv_stack_align (HOST_WIDE_INT value)
+{
+  return RISCV_STACK_ALIGN (value);
+}
+
 /* Populate the current function's riscv_frame_info structure.
 
    RISC-V stack frames grown downward.  High addresses are at the top.
@@ -3989,7 +4084,7 @@ static void
 riscv_compute_frame_info (void)
 {
   struct riscv_frame_info *frame;
-  HOST_WIDE_INT offset;
+  poly_int64 offset;
   bool interrupt_save_prologue_temp = false;
   unsigned int regno, i, num_x_saved = 0, num_f_saved = 0;
 
@@ -4000,7 +4095,7 @@ riscv_compute_frame_info (void)
   if (cfun->machine->interrupt_handler_p)
     {
       HOST_WIDE_INT step1 = riscv_first_stack_step (frame);
-      if (! SMALL_OPERAND (frame->total_size - step1))
+      if (! POLY_SMALL_OPERAND_P (frame->total_size - step1))
 	interrupt_save_prologue_temp = true;
     }
 
@@ -4030,23 +4125,23 @@ riscv_compute_frame_info (void)
     }
 
   /* At the bottom of the frame are any outgoing stack arguments. */
-  offset = RISCV_STACK_ALIGN (crtl->outgoing_args_size);
+  offset = riscv_stack_align (crtl->outgoing_args_size);
   /* Next are local stack variables. */
-  offset += RISCV_STACK_ALIGN (get_frame_size ());
+  offset += riscv_stack_align (get_frame_size ());
   /* The virtual frame pointer points above the local variables. */
   frame->frame_pointer_offset = offset;
   /* Next are the callee-saved FPRs. */
   if (frame->fmask)
-    offset += RISCV_STACK_ALIGN (num_f_saved * UNITS_PER_FP_REG);
+    offset += riscv_stack_align (num_f_saved * UNITS_PER_FP_REG);
   frame->fp_sp_offset = offset - UNITS_PER_FP_REG;
   /* Next are the callee-saved GPRs. */
   if (frame->mask)
     {
-      unsigned x_save_size = RISCV_STACK_ALIGN (num_x_saved * UNITS_PER_WORD);
+      unsigned x_save_size = riscv_stack_align (num_x_saved * UNITS_PER_WORD);
       unsigned num_save_restore = 1 + riscv_save_libcall_count (frame->mask);
 
       /* Only use save/restore routines if they don't alter the stack size.  */
-      if (RISCV_STACK_ALIGN (num_save_restore * UNITS_PER_WORD) == x_save_size)
+      if (riscv_stack_align (num_save_restore * UNITS_PER_WORD) == x_save_size)
 	{
 	  /* Libcall saves/restores 3 registers at once, so we need to
 	     allocate 12 bytes for callee-saved register.  */
@@ -4062,17 +4157,18 @@ riscv_compute_frame_info (void)
   /* The hard frame pointer points above the callee-saved GPRs. */
   frame->hard_frame_pointer_offset = offset;
   /* Above the hard frame pointer is the callee-allocated varags save area. */
-  offset += RISCV_STACK_ALIGN (cfun->machine->varargs_size);
+  offset += riscv_stack_align (cfun->machine->varargs_size);
   /* Next is the callee-allocated area for pretend stack arguments.  */
-  offset += RISCV_STACK_ALIGN (crtl->args.pretend_args_size);
+  offset += riscv_stack_align (crtl->args.pretend_args_size);
   /* Arg pointer must be below pretend args, but must be above alignment
      padding.  */
   frame->arg_pointer_offset = offset - crtl->args.pretend_args_size;
   frame->total_size = offset;
   /* Next points the incoming stack pointer and any incoming arguments. */
 
   /* Only use save/restore routines when the GPRs are atop the frame.  */
-  if (frame->hard_frame_pointer_offset != frame->total_size)
+  if (known_ne (frame->hard_frame_pointer_offset, frame->total_size))
     frame->save_libcall_adjustment = 0;
 }
 
@@ -4089,10 +4185,10 @@ riscv_can_eliminate (const int from ATTRIBUTE_UNUSED, const int to)
    or argument pointer.  TO is either the stack pointer or hard frame
    pointer.  */
 
-HOST_WIDE_INT
+poly_int64
 riscv_initial_elimination_offset (int from, int to)
 {
-  HOST_WIDE_INT src, dest;
+  poly_int64 src, dest;
 
   riscv_compute_frame_info ();
 
@@ -4136,7 +4232,7 @@ riscv_set_return_address (rtx address, rtx scratch)
 
   gcc_assert (BITSET_P (cfun->machine->frame.mask, RETURN_ADDR_REGNUM));
   slot_address = riscv_add_offset (scratch, stack_pointer_rtx,
-				  cfun->machine->frame.gp_sp_offset);
+				  cfun->machine->frame.gp_sp_offset.to_constant());
   riscv_emit_move (gen_frame_mem (GET_MODE (address), slot_address), address);
 }
 
@@ -4163,13 +4259,13 @@ riscv_save_restore_reg (machine_mode mode, int regno,
    of the frame.  */
 
 static void
-riscv_for_each_saved_reg (HOST_WIDE_INT sp_offset, riscv_save_restore_fn fn,
+riscv_for_each_saved_reg (poly_int64 sp_offset, riscv_save_restore_fn fn,
 			  bool epilogue, bool maybe_eh_return)
 {
   HOST_WIDE_INT offset;
 
   /* Save the link register and s-registers. */
-  offset = cfun->machine->frame.gp_sp_offset - sp_offset;
+  offset = (cfun->machine->frame.gp_sp_offset - sp_offset).to_constant ();
   for (unsigned int regno = GP_REG_FIRST; regno <= GP_REG_LAST; regno++)
     if (BITSET_P (cfun->machine->frame.mask, regno - GP_REG_FIRST))
       {
@@ -4200,14 +4296,14 @@ riscv_for_each_saved_reg (HOST_WIDE_INT sp_offset, riscv_save_restore_fn fn,
 
   /* This loop must iterate over the same space as its companion in
      riscv_compute_frame_info.  */
-  offset = cfun->machine->frame.fp_sp_offset - sp_offset;
+  offset = (cfun->machine->frame.fp_sp_offset - sp_offset).to_constant ();
   for (unsigned int regno = FP_REG_FIRST; regno <= FP_REG_LAST; regno++)
     if (BITSET_P (cfun->machine->frame.fmask, regno - FP_REG_FIRST))
       {
 	machine_mode mode = TARGET_DOUBLE_FLOAT ? DFmode : SFmode;
 
 	riscv_save_restore_reg (mode, regno, offset, fn);
-	offset -= GET_MODE_SIZE (mode);
+	offset -= GET_MODE_SIZE (mode).to_constant ();
       }
 }
 
@@ -4249,21 +4345,21 @@ riscv_restore_reg (rtx reg, rtx mem)
 static HOST_WIDE_INT
 riscv_first_stack_step (struct riscv_frame_info *frame)
 {
-  if (SMALL_OPERAND (frame->total_size))
-    return frame->total_size;
+  if (SMALL_OPERAND (frame->total_size.to_constant ()))
+    return frame->total_size.to_constant ();
 
   HOST_WIDE_INT min_first_step =
-    RISCV_STACK_ALIGN (frame->total_size - frame->fp_sp_offset);
+    RISCV_STACK_ALIGN ((frame->total_size - frame->fp_sp_offset).to_constant ());
   HOST_WIDE_INT max_first_step = IMM_REACH / 2 - PREFERRED_STACK_BOUNDARY / 8;
-  HOST_WIDE_INT min_second_step = frame->total_size - max_first_step;
+  HOST_WIDE_INT min_second_step = frame->total_size.to_constant () - max_first_step;
   gcc_assert (min_first_step <= max_first_step);
 
   /* As an optimization, use the least-significant bits of the total frame
      size, so that the second adjustment step is just LUI + ADD.  */
   if (!SMALL_OPERAND (min_second_step)
-      && frame->total_size % IMM_REACH < IMM_REACH / 2
-      && frame->total_size % IMM_REACH >= min_first_step)
-    return frame->total_size % IMM_REACH;
+      && frame->total_size.to_constant () % IMM_REACH < IMM_REACH / 2
+      && frame->total_size.to_constant () % IMM_REACH >= min_first_step)
+    return frame->total_size.to_constant () % IMM_REACH;
 
   if (TARGET_RVC)
     {
@@ -4336,7 +4432,7 @@ void
 riscv_expand_prologue (void)
 {
   struct riscv_frame_info *frame = &cfun->machine->frame;
-  HOST_WIDE_INT size = frame->total_size;
+  HOST_WIDE_INT size = frame->total_size.to_constant ();
   unsigned mask = frame->mask;
   rtx insn;
 
@@ -4379,7 +4475,7 @@ riscv_expand_prologue (void)
   if (frame_pointer_needed)
     {
       insn = gen_add3_insn (hard_frame_pointer_rtx, stack_pointer_rtx,
-			    GEN_INT (frame->hard_frame_pointer_offset - size));
+			    GEN_INT ((frame->hard_frame_pointer_offset - size).to_constant ()));
       RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
 
       riscv_emit_stack_tie ();
@@ -4445,7 +4541,7 @@ riscv_expand_epilogue (int style)
      Start off by assuming that no registers need to be restored.  */
   struct riscv_frame_info *frame = &cfun->machine->frame;
   unsigned mask = frame->mask;
-  HOST_WIDE_INT step1 = frame->total_size;
+  HOST_WIDE_INT step1 = frame->total_size.to_constant ();
   HOST_WIDE_INT step2 = 0;
   bool use_restore_libcall = ((style == NORMAL_RETURN)
 			      && riscv_use_save_libcall (frame));
@@ -4453,8 +4549,8 @@ riscv_expand_epilogue (int style)
   rtx insn;
 
   /* We need to add memory barrier to prevent read from deallocated stack.  */
-  bool need_barrier_p = (get_frame_size ()
-			 + cfun->machine->frame.arg_pointer_offset) != 0;
+  bool need_barrier_p = known_ne (get_frame_size ()
+				  + cfun->machine->frame.arg_pointer_offset, 0);
 
   if (cfun->machine->naked_p)
     {
@@ -4481,7 +4577,7 @@ riscv_expand_epilogue (int style)
       riscv_emit_stack_tie ();
       need_barrier_p = false;
 
-      rtx adjust = GEN_INT (-frame->hard_frame_pointer_offset);
+      rtx adjust = GEN_INT (-frame->hard_frame_pointer_offset.to_constant ());
       if (!SMALL_OPERAND (INTVAL (adjust)))
 	{
 	  riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), adjust);
@@ -4495,7 +4591,7 @@ riscv_expand_epilogue (int style)
       rtx dwarf = NULL_RTX;
       rtx cfa_adjust_value = gen_rtx_PLUS (
 			       Pmode, hard_frame_pointer_rtx,
-			       GEN_INT (-frame->hard_frame_pointer_offset));
+			       GEN_INT (-frame->hard_frame_pointer_offset.to_constant ()));
       rtx cfa_adjust_rtx = gen_rtx_SET (stack_pointer_rtx, cfa_adjust_value);
       dwarf = alloc_reg_note (REG_CFA_ADJUST_CFA, cfa_adjust_rtx, dwarf);
       RTX_FRAME_RELATED_P (insn) = 1;
@@ -4512,7 +4608,7 @@ riscv_expand_epilogue (int style)
     }
 
   /* Set TARGET to BASE + STEP1.  */
-  if (step1 > 0)
+  if (known_gt (step1, 0))
     {
       /* Emit a barrier to prevent loads from a deallocated stack.  */
       riscv_emit_stack_tie ();
@@ -4638,7 +4734,7 @@ riscv_epilogue_uses (unsigned int regno)
 bool
 riscv_can_use_return_insn (void)
 {
-  return (reload_completed && cfun->machine->frame.total_size == 0
+  return (reload_completed && known_eq (cfun->machine->frame.total_size, 0)
 	  && ! cfun->machine->interrupt_handler_p);
 }
 
@@ -4738,7 +4834,8 @@ static bool
 riscv_secondary_memory_needed (machine_mode mode, reg_class_t class1,
 			       reg_class_t class2)
 {
-  return (GET_MODE_SIZE (mode) > UNITS_PER_WORD
+  return (!rvv_mode_p (mode)
+	  && GET_MODE_SIZE (mode).to_constant () > UNITS_PER_WORD
 	  && (class1 == FP_REGS) != (class2 == FP_REGS));
 }
 
@@ -4760,11 +4857,33 @@ riscv_register_move_cost (machine_mode mode,
 static unsigned int
 riscv_hard_regno_nregs (unsigned int regno, machine_mode mode)
 {
+  if (rvv_mode_p (mode))
+    {
+      /* TODO: Tuple-mode register manipulation for segment instructions
+	 will be supported in the future.  */
+
+      /* A fractional-LMUL mode occupies only part of a vector register,
+	 but it still needs one whole vector register to hold it.  */
+      if (maybe_lt (GET_MODE_SIZE (mode), UNITS_PER_V_REG))
+	return 1;
+
+      return exact_div (GET_MODE_SIZE (mode), UNITS_PER_V_REG).to_constant ();
+    }
+
+  /* The modes of VL and VTYPE are just markers and do not hold a value,
+     so each always consumes a single register.  */
+  if (regno == VTYPE_REGNUM || regno == VL_REGNUM)
+    return 1;
+
+  /* Assume every valid non-vector mode fits in one vector register.  */
+  if (V_REG_P (regno))
+    return 1;
+
   if (FP_REG_P (regno))
-    return (GET_MODE_SIZE (mode) + UNITS_PER_FP_REG - 1) / UNITS_PER_FP_REG;
+    return (GET_MODE_SIZE (mode).to_constant () + UNITS_PER_FP_REG - 1) / UNITS_PER_FP_REG;
 
   /* All other registers are word-sized.  */
-  return (GET_MODE_SIZE (mode) + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
+  return (GET_MODE_SIZE (mode).to_constant () + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
 }
 
 /* Implement TARGET_HARD_REGNO_MODE_OK.  */
@@ -4776,11 +4895,17 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 
   if (GP_REG_P (regno))
     {
+      if (rvv_mode_p (mode))
+	return false;
+
       if (!GP_REG_P (regno + nregs - 1))
 	return false;
     }
   else if (FP_REG_P (regno))
     {
+      if (rvv_mode_p (mode))
+	return false;
+
       if (!FP_REG_P (regno + nregs - 1))
 	return false;
 
@@ -4795,6 +4920,18 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 	      && GET_MODE_UNIT_SIZE (mode) > UNITS_PER_FP_ARG))
 	return false;
     }
+  else if (V_REG_P (regno))
+    {
+      if (!rvv_mode_p (mode))
+	return false;
+
+      /* RVV spec 3.3.2: for LMUL = 2, 4 or 8, the register number must be
+	 a multiple of 2, 4 or 8 respectively; a mask register may use any
+	 register number.  */
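+      /* For example, an LMUL = 4 mode such as VNx16SI (regsize 4) may
+	 only start at v0, v4, v8, ..., v28.  */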
+      int regsize = rvv_regsize (mode);
+
+      if (regsize != 1)
+	return ((regno % regsize) == 0);
+    }
   else
     return false;
 
@@ -4971,6 +5108,26 @@ riscv_init_machine_status (void)
   return ggc_cleared_alloc<machine_function> ();
 }
 
+/* Return the number of 64-bit chunks associated with the
+   -mriscv-vector-bits= value VALUE.  */
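+/* For example, -mriscv-vector-bits=128 yields 128 / 64 = 2 chunks, while
+   -mriscv-vector-bits=scalable yields the runtime-determined poly_uint16
+   (2, 2).  */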
+
+static poly_uint16
+riscv_convert_riscv_vector_bits (riscv_vector_bits_enum value)
+{
+  /* 64-bit RVV modes would use different register layouts on big-endian
+     targets, so we would need to forbid subregs that convert from one to
+     the other.  By default a reinterpret sequence would then involve a
+     store to memory in one mode and a load back in the other.  Even if we
+     optimize that sequence using reverse instructions, it would still be
+     a significant potential overhead.
+
+     For now, it seems better to generate length-agnostic code for that
+     case instead.  */
+  if (value == RVV_SCALABLE)
+    return poly_uint16 (2, 2);
+  else
+    return (int) value / 64;
+}
+
 /* Implement TARGET_OPTION_OVERRIDE.  */
 
 static void
@@ -5116,7 +5273,9 @@ riscv_option_override (void)
 
       riscv_stack_protector_guard_offset = offs;
     }
-
+
+  /* Convert -mriscv-vector-bits to a chunks count.  */
+  riscv_vector_chunks = riscv_convert_riscv_vector_bits (riscv_vector_bits);
 }
 
 /* Implement TARGET_CONDITIONAL_REGISTER_USAGE.  */
@@ -5474,9 +5633,10 @@ riscv_promote_function_mode (const_tree type ATTRIBUTE_UNUSED,
     return promote_mode (type, mode, punsignedp);
 
   unsignedp = *punsignedp;
-  PROMOTE_MODE (mode, unsignedp, type);
+  scalar_mode smode = as_a <scalar_mode> (mode);
+  PROMOTE_MODE (smode, unsignedp, type);
   *punsignedp = unsignedp;
-  return mode;
+  return smode;
 }
 
 /* Implement TARGET_MACHINE_DEPENDENT_REORG.  */
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 5083a1c24b0..8f56a5a4746 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -123,7 +123,7 @@ ASM_MISA_SPEC
 
 /* The mapping from gcc register number to DWARF 2 CFA column number.  */
 #define DWARF_FRAME_REGNUM(REGNO) \
-  (GP_REG_P (REGNO) || FP_REG_P (REGNO) ? REGNO : INVALID_REGNUM)
+  (GP_REG_P (REGNO) || FP_REG_P (REGNO)				\
+   || (TARGET_VECTOR && V_REG_P (REGNO)) ? REGNO : INVALID_REGNUM)
 
 /* The DWARF 2 CFA column which tracks the return address.  */
 #define DWARF_FRAME_RETURN_COLUMN RETURN_ADDR_REGNUM
@@ -155,6 +155,7 @@ ASM_MISA_SPEC
 
 /* The `Q' extension is not yet supported.  */
 #define UNITS_PER_FP_REG (TARGET_DOUBLE_FLOAT ? 8 : 4)
+#define UNITS_PER_V_REG (GET_MODE_SIZE (VNx2DImode))
 
 /* The largest type that can be passed in floating-point registers.  */
 #define UNITS_PER_FP_ARG						\
@@ -289,9 +290,13 @@ ASM_MISA_SPEC
    - 32 floating point registers
    - 2 fake registers:
 	- ARG_POINTER_REGNUM
-	- FRAME_POINTER_REGNUM */
+	- FRAME_POINTER_REGNUM
+   - 1 vl register
+   - 1 vtype register
+   - 28 registers reserved for future expansion
+   - 32 vector registers */
 
-#define FIRST_PSEUDO_REGISTER 66
+#define FIRST_PSEUDO_REGISTER 128
 
 /* x0, sp, gp, and tp are fixed.  */
 
@@ -303,7 +308,11 @@ ASM_MISA_SPEC
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,			\
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,			\
   /* Others.  */							\
-  1, 1									\
+  1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,			\
+  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,			\
+  /* Vector registers.  */							\
+  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,			\
+  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0			\
 }
 
 /* a0-a7, t0-t6, fa0-fa7, and ft0-ft11 are volatile across calls.
@@ -317,7 +326,11 @@ ASM_MISA_SPEC
   1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1,			\
   1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,			\
   /* Others.  */							\
-  1, 1									\
+  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,			\
+  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,			\
+  /* Vector registers.  */							\
+  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,			\
+  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1			\
 }
 
 /* Select a register mode required for caller save of hard regno REGNO.
@@ -337,6 +350,10 @@ ASM_MISA_SPEC
 #define FP_REG_LAST  63
 #define FP_REG_NUM   (FP_REG_LAST - FP_REG_FIRST + 1)
 
+#define V_REG_FIRST 96
+#define V_REG_LAST  127
+#define V_REG_NUM   (V_REG_LAST - V_REG_FIRST + 1)
+
 /* The DWARF 2 CFA column which tracks the return address from a
    signal handler context.  This means that to maintain backwards
    compatibility, no hard register can be assigned this column if it
@@ -347,6 +364,8 @@ ASM_MISA_SPEC
   ((unsigned int) ((int) (REGNO) - GP_REG_FIRST) < GP_REG_NUM)
 #define FP_REG_P(REGNO)  \
   ((unsigned int) ((int) (REGNO) - FP_REG_FIRST) < FP_REG_NUM)
+#define V_REG_P(REGNO)  \
+  ((unsigned int) ((int) (REGNO) - V_REG_FIRST) < V_REG_NUM)
 
 /* True when REGNO is in SIBCALL_REGS set.  */
 #define SIBCALL_REG_P(REGNO)	\
@@ -430,6 +449,11 @@ enum reg_class
   GR_REGS,			/* integer registers */
   FP_REGS,			/* floating-point registers */
   FRAME_REGS,			/* arg pointer and frame pointer */
+  VL_REGS,			/* vl register */
+  VTYPE_REGS,			/* vtype register */
+  VM_REGS,			/* v0.t registers */
+  VD_REGS,			/* vector registers except v0.t */
+  V_REGS,			/* vector registers */
   ALL_REGS,			/* all registers */
   LIM_REG_CLASSES		/* max value + 1 */
 };
@@ -450,6 +474,11 @@ enum reg_class
   "GR_REGS",								\
   "FP_REGS",								\
   "FRAME_REGS",								\
+  "VL_REGS",								\
+  "VTYPE_REGS",								\
+  "VM_REGS",								\
+  "VD_REGS",								\
+  "V_REGS",								\
   "ALL_REGS"								\
 }
 
@@ -466,13 +495,18 @@ enum reg_class
 
 #define REG_CLASS_CONTENTS						\
 {									\
-  { 0x00000000, 0x00000000, 0x00000000 },	/* NO_REGS */		\
-  { 0xf003fcc0, 0x00000000, 0x00000000 },	/* SIBCALL_REGS */	\
-  { 0xffffffc0, 0x00000000, 0x00000000 },	/* JALR_REGS */		\
-  { 0xffffffff, 0x00000000, 0x00000000 },	/* GR_REGS */		\
-  { 0x00000000, 0xffffffff, 0x00000000 },	/* FP_REGS */		\
-  { 0x00000000, 0x00000000, 0x00000003 },	/* FRAME_REGS */	\
-  { 0xffffffff, 0xffffffff, 0x00000003 }	/* ALL_REGS */		\
+  { 0x00000000, 0x00000000, 0x00000000, 0x00000000 },	/* NO_REGS */		\
+  { 0xf003fcc0, 0x00000000, 0x00000000, 0x00000000 },	/* SIBCALL_REGS */	\
+  { 0xffffffc0, 0x00000000, 0x00000000, 0x00000000 },	/* JALR_REGS */		\
+  { 0xffffffff, 0x00000000, 0x00000000, 0x00000000 },	/* GR_REGS */		\
+  { 0x00000000, 0xffffffff, 0x00000000, 0x00000000 },	/* FP_REGS */		\
+  { 0x00000000, 0x00000000, 0x00000003, 0x00000000 },	/* FRAME_REGS */	\
+  { 0x00000000, 0x00000000, 0x00000004, 0x00000000 },	/* VL_REGS */		\
+  { 0x00000000, 0x00000000, 0x00000008, 0x00000000 },	/* VTYPE_REGS */	\
+  { 0x00000000, 0x00000000, 0x00000000, 0x00000001 },	/* VM_REGS */		\
+  { 0x00000000, 0x00000000, 0x00000000, 0xfffffffe },	/* VD_REGS */	\
+  { 0x00000000, 0x00000000, 0x00000000, 0xffffffff },	/* V_REGS */		\
+  { 0xffffffff, 0xffffffff, 0x0000000f, 0xffffffff }	/* ALL_REGS */		\
 }
 
 /* A C expression whose value is a register class containing hard
@@ -512,9 +546,16 @@ enum reg_class
   60, 61, 62, 63,							\
   /* Call-saved FPRs.  */						\
   40, 41, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,			\
+  /* V24 ~ V31.  */							\
+  120, 121, 122, 123, 124, 125, 126, 127,				\
+  /* V8 ~ V23.  */							\
+  104, 105, 106, 107, 108, 109, 110, 111,				\
+  112, 113, 114, 115, 116, 117, 118, 119,				\
+  /* V0 ~ V7.  */							\
+  96, 97, 98, 99, 100, 101, 102, 103,					\
   /* None of the remaining classes have defined call-saved		\
      registers.  */							\
-  64, 65								\
+  64, 65, 66, 67							\
 }
 
 /* True if VALUE is a signed 12-bit number.  */
@@ -522,6 +563,10 @@ enum reg_class
 #define SMALL_OPERAND(VALUE) \
   ((unsigned HOST_WIDE_INT) (VALUE) + IMM_REACH/2 < IMM_REACH)
 
+#define POLY_SMALL_OPERAND_P(POLY_VALUE)				\
+  ((POLY_VALUE).is_constant ()						\
+   && SMALL_OPERAND ((POLY_VALUE).to_constant ()))
+
 /* True if VALUE can be loaded into a register using LUI.  */
 
 #define LUI_OPERAND(VALUE)						\
@@ -780,7 +825,14 @@ typedef struct {
   "fs0", "fs1", "fa0", "fa1", "fa2", "fa3", "fa4", "fa5",	\
   "fa6", "fa7", "fs2", "fs3", "fs4", "fs5", "fs6", "fs7",	\
   "fs8", "fs9", "fs10","fs11","ft8", "ft9", "ft10","ft11",	\
-  "arg", "frame", }
+  "arg", "frame", "vl", "vtype", "N/A", "N/A", "N/A", "N/A",	\
+  "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A",	\
+  "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A",	\
+  "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A", "N/A",	\
+  "v0",  "v1",  "v2",  "v3",  "v4",  "v5",  "v6",  "v7",	\
+  "v8",  "v9",  "v10", "v11", "v12", "v13", "v14", "v15",	\
+  "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23",	\
+  "v24", "v25", "v26", "v27", "v28", "v29", "v30", "v31", }
 
 #define ADDITIONAL_REGISTER_NAMES					\
 {									\
@@ -955,6 +1007,10 @@ while (0)
 extern const enum reg_class riscv_regno_to_class[];
 extern bool riscv_slow_unaligned_access_p;
 extern unsigned riscv_stack_boundary;
+extern poly_uint16 riscv_vector_chunks;
+/* The number of bits and bytes in an RVV vector.  */
+#define BITS_PER_RISCV_VECTOR (poly_uint16 (riscv_vector_chunks * 64))
+#define BYTES_PER_RISCV_VECTOR (poly_uint16 (riscv_vector_chunks * 8))
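+/* For example, with -mriscv-vector-bits=256, riscv_vector_chunks is 4, so
+   BITS_PER_RISCV_VECTOR is 256 and BYTES_PER_RISCV_VECTOR is 32.  */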
 #endif
 
 #define ASM_PREFERRED_EH_DATA_FORMAT(CODE,GLOBAL) \
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index b8ab0cf169a..8e880ba8599 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -102,6 +102,10 @@
    (NORMAL_RETURN		0)
    (SIBCALL_RETURN		1)
    (EXCEPTION_RETURN		2)
+
+   ;; Constant helper for RVV
+   (VL_REGNUM			66)
+   (VTYPE_REGNUM		67)
 ])
 
 (include "predicates.md")
@@ -1619,23 +1623,23 @@
 })
 
 (define_insn "*movdi_32bit"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,m,  *f,*f,*r,*f,*m")
-	(match_operand:DI 1 "move_operand"         " r,i,m,r,*J*r,*m,*f,*f,*f"))]
+  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,r,m,  *f,*f,*r,*f,*m")
+	(match_operand:DI 1 "move_operand"         " vp,r,i,m,r,*J*r,*m,*f,*f,*f"))]
   "!TARGET_64BIT
    && (register_operand (operands[0], DImode)
        || reg_or_0_operand (operands[1], DImode))"
   { return riscv_output_move (operands[0], operands[1]); }
-  [(set_attr "move_type" "move,const,load,store,mtc,fpload,mfc,fmove,fpstore")
+  [(set_attr "move_type" "const,move,const,load,store,mtc,fpload,mfc,fmove,fpstore")
    (set_attr "mode" "DI")])
 
 (define_insn "*movdi_64bit"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r, m,  *f,*f,*r,*f,*m")
-	(match_operand:DI 1 "move_operand"         " r,T,m,rJ,*r*J,*m,*f,*f,*f"))]
+  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,r,r,r, m,  *f,*f,*r,*f,*m")
+	(match_operand:DI 1 "move_operand"         " vp,r,T,m,rJ,*r*J,*m,*f,*f,*f"))]
   "TARGET_64BIT
    && (register_operand (operands[0], DImode)
        || reg_or_0_operand (operands[1], DImode))"
   { return riscv_output_move (operands[0], operands[1]); }
-  [(set_attr "move_type" "move,const,load,store,mtc,fpload,mfc,fmove,fpstore")
+  [(set_attr "move_type" "const,move,const,load,store,mtc,fpload,mfc,fmove,fpstore")
    (set_attr "mode" "DI")])
 
 ;; 32-bit Integer moves
@@ -1650,12 +1654,12 @@
 })
 
 (define_insn "*movsi_internal"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r, m,  *f,*f,*r,*m")
-	(match_operand:SI 1 "move_operand"         " r,T,m,rJ,*r*J,*m,*f,*f"))]
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,r, m,  *f,*f,*r,*m")
+	(match_operand:SI 1 "move_operand"         " vp,r,T,m,rJ,*r*J,*m,*f,*f"))]
   "(register_operand (operands[0], SImode)
     || reg_or_0_operand (operands[1], SImode))"
   { return riscv_output_move (operands[0], operands[1]); }
-  [(set_attr "move_type" "move,const,load,store,mtc,fpload,mfc,fpstore")
+  [(set_attr "move_type" "const,move,const,load,store,mtc,fpload,mfc,fpstore")
    (set_attr "mode" "SI")])
 
 ;; 16-bit Integer moves
@@ -1675,12 +1679,12 @@
 })
 
 (define_insn "*movhi_internal"
-  [(set (match_operand:HI 0 "nonimmediate_operand" "=r,r,r, m,  *f,*r")
-	(match_operand:HI 1 "move_operand"	   " r,T,m,rJ,*r*J,*f"))]
+  [(set (match_operand:HI 0 "nonimmediate_operand" "=r,r,r,r, m,  *f,*r")
+	(match_operand:HI 1 "move_operand"	   " vp,r,T,m,rJ,*r*J,*f"))]
   "(register_operand (operands[0], HImode)
     || reg_or_0_operand (operands[1], HImode))"
   { return riscv_output_move (operands[0], operands[1]); }
-  [(set_attr "move_type" "move,const,load,store,mtc,mfc")
+  [(set_attr "move_type" "const,move,const,load,store,mtc,mfc")
    (set_attr "mode" "HI")])
 
 ;; HImode constant generation; see riscv_move_integer for details.
@@ -1717,12 +1721,12 @@
 })
 
 (define_insn "*movqi_internal"
-  [(set (match_operand:QI 0 "nonimmediate_operand" "=r,r,r, m,  *f,*r")
-	(match_operand:QI 1 "move_operand"         " r,I,m,rJ,*r*J,*f"))]
+  [(set (match_operand:QI 0 "nonimmediate_operand" "=r,r,r,r, m,  *f,*r")
+	(match_operand:QI 1 "move_operand"         " vp,r,I,m,rJ,*r*J,*f"))]
   "(register_operand (operands[0], QImode)
     || reg_or_0_operand (operands[1], QImode))"
   { return riscv_output_move (operands[0], operands[1]); }
-  [(set_attr "move_type" "move,const,load,store,mtc,mfc")
+  [(set_attr "move_type" "const,move,const,load,store,mtc,mfc")
    (set_attr "mode" "QI")])
 
 ;; 32-bit floating point moves
@@ -2095,7 +2099,7 @@
        (lshiftrt:GPR (match_dup 3) (match_dup 2)))]
 {
   /* Op2 is a VOIDmode constant, so get the mode size from op1.  */
-  operands[2] = GEN_INT (GET_MODE_BITSIZE (GET_MODE (operands[1]))
+  operands[2] = GEN_INT (GET_MODE_BITSIZE (GET_MODE (operands[1])).to_constant ()
 			 - exact_log2 (INTVAL (operands[2]) + 1));
 })
 
diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
index 9e9fe6d8ccd..42bdca569cd 100644
--- a/gcc/config/riscv/riscv.opt
+++ b/gcc/config/riscv/riscv.opt
@@ -70,6 +70,38 @@ Enum(abi_type) String(lp64f) Value(ABI_LP64F)
 EnumValue
 Enum(abi_type) String(lp64d) Value(ABI_LP64D)
 
+Enum
+Name(riscv_vector_bits) Type(enum riscv_vector_bits_enum)
+The possible RVV vector lengths:
+
+EnumValue
+Enum(riscv_vector_bits) String(scalable) Value(RVV_SCALABLE)
+
+EnumValue
+Enum(riscv_vector_bits) String(64) Value(RVV_64)
+
+EnumValue
+Enum(riscv_vector_bits) String(128) Value(RVV_128)
+
+EnumValue
+Enum(riscv_vector_bits) String(256) Value(RVV_256)
+
+EnumValue
+Enum(riscv_vector_bits) String(512) Value(RVV_512)
+
+EnumValue
+Enum(riscv_vector_bits) String(1024) Value(RVV_1024)
+
+EnumValue
+Enum(riscv_vector_bits) String(2048) Value(RVV_2048)
+
+EnumValue
+Enum(riscv_vector_bits) String(4096) Value(RVV_4096)
+
+mriscv-vector-bits=
+Target RejectNegative Joined Enum(riscv_vector_bits) Var(riscv_vector_bits) Init(RVV_SCALABLE)
+-mriscv-vector-bits=<number>	Set the number of bits in an RVV vector register.
+
 mfdiv
 Target Mask(FDIV)
 Use hardware floating-point divide and square root instructions.
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 19736b3a38f..b5abf9c45d0 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -23,6 +23,10 @@ riscv-shorten-memrefs.o: $(srcdir)/config/riscv/riscv-shorten-memrefs.cc
 	$(COMPILE) $<
 	$(POSTCOMPILE)
 
+riscv-vector.o: $(srcdir)/config/riscv/riscv-vector.cc
+	$(COMPILE) $<
+	$(POSTCOMPILE)
+
 PASSES_EXTRA += $(srcdir)/config/riscv/riscv-passes.def
 
 $(common_out_file): $(srcdir)/config/riscv/riscv-cores.def \
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 02/21]   Add RVV intrinsic framework
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
  2022-05-31  8:49 ` [PATCH 01/21] Add RVV modes and support scalable vector juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 03/21] Add RVV datatypes juzhe.zhong
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config.gcc: Add riscv-vector-builtins-functions.o and riscv-vector-builtins.o extra_objs for RVV support.
        * config/riscv/riscv-builtins.cc (riscv_init_builtins): Add RVV support.
        (riscv_builtin_decl): Add RVV support.
        (riscv_expand_builtin): Add RVV support.
        (riscv_gimple_fold_builtin): New function.
        * config/riscv/riscv-protos.h (riscv_gimple_fold_builtin): New function.
        (enum riscv_builtin_class): New macro define.
        * config/riscv/riscv-vector.cc (rvv_get_mask_mode): New function.
        * config/riscv/riscv-vector.h (rvv_get_mask_mode): New function.
        * config/riscv/riscv.cc (riscv_class_max_nregs): Add RVV register.
        (riscv_conditional_register_usage): Add RVV register.
        (TARGET_GIMPLE_FOLD_BUILTIN): New targethook.
        * config/riscv/t-riscv: New object.
        * config/riscv/md-parser: New file.
        * config/riscv/riscv-vector-builtins-functions.cc: New file.
        * config/riscv/riscv-vector-builtins-functions.def: New file.
        * config/riscv/riscv-vector-builtins-functions.h: New file.
        * config/riscv/riscv-vector-builtins-iterators.def: New file.
        * config/riscv/riscv-vector-builtins.cc: New file.
        * config/riscv/riscv-vector-builtins.def: New file.
        * config/riscv/riscv-vector-builtins.h: New file.
        * config/riscv/vector-iterators.md: New file.
        
---
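Note for reviewers: general and vector builtins now share a single
function-code space.  A minimal sketch of the encoding (matching the
riscv-protos.h hunk below): the builtin class occupies the low bit and
the per-class subcode the remaining bits.

  /* Registering a vector builtin with subcode SUBCODE.  */
  unsigned int code = (subcode << RISCV_BUILTIN_SHIFT) + RISCV_BUILTIN_VECTOR;
  /* Decoding it again, as riscv_builtin_decl and friends do.  */
  enum riscv_builtin_class cls
    = (enum riscv_builtin_class) (code & RISCV_BUILTIN_CLASS);
  unsigned int subcode_again = code >> RISCV_BUILTIN_SHIFT;
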
 gcc/config.gcc                                |    5 +-
 gcc/config/riscv/md-parser                    |  205 ++++
 gcc/config/riscv/riscv-builtins.cc            |   88 +-
 gcc/config/riscv/riscv-protos.h               |   19 +
 .../riscv/riscv-vector-builtins-functions.cc  | 1012 +++++++++++++++++
 .../riscv/riscv-vector-builtins-functions.def |   34 +
 .../riscv/riscv-vector-builtins-functions.h   |  491 ++++++++
 .../riscv/riscv-vector-builtins-iterators.def |   12 +
 gcc/config/riscv/riscv-vector-builtins.cc     |  266 +++++
 gcc/config/riscv/riscv-vector-builtins.def    |   37 +
 gcc/config/riscv/riscv-vector-builtins.h      |   59 +
 gcc/config/riscv/riscv-vector.cc              |   17 +
 gcc/config/riscv/riscv-vector.h               |    1 +
 gcc/config/riscv/riscv.cc                     |   21 +
 gcc/config/riscv/t-riscv                      |   36 +
 gcc/config/riscv/vector-iterators.md          |   19 +
 16 files changed, 2307 insertions(+), 15 deletions(-)
 create mode 100644 gcc/config/riscv/md-parser
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.cc
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-functions.h
 create mode 100644 gcc/config/riscv/riscv-vector-builtins-iterators.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.cc
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.def
 create mode 100644 gcc/config/riscv/riscv-vector-builtins.h
 create mode 100644 gcc/config/riscv/vector-iterators.md

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 50154c2eb3a..bdda82ae576 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -517,8 +517,11 @@ pru-*-*)
 	;;
 riscv*)
 	cpu_type=riscv
-	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o"
+	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o riscv-vector-builtins-functions.o riscv-vector-builtins.o"
 	d_target_objs="riscv-d.o"
+	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-builtins.cc \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
+	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins-functions.cc"
+	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins-functions.h"
 	;;
 rs6000*-*-*)
 	extra_options="${extra_options} g.opt fused-madd.opt rs6000/rs6000-tables.opt"
diff --git a/gcc/config/riscv/md-parser b/gcc/config/riscv/md-parser
new file mode 100644
index 00000000000..311b8709c0a
--- /dev/null
+++ b/gcc/config/riscv/md-parser
@@ -0,0 +1,205 @@
+# Mode iterators and attributes parser for RISC-V 'V' Extension for GNU compiler.
+# Copyright (C) 2022-2022 Free Software Foundation, Inc.
+# Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+# 
+# This file is part of GCC.
+# 
+# GCC is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3, or (at your option)
+# any later version.
+# 
+# GCC is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+# 
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+import os
+import sys
+
+
+def print_err(msg):
+    print("\033[31m\033[1mRISCV md parser Error: %s\033[0m\033[0m" % msg)
+
+
+def write_to_file(file_path, one_line, opt='a+'):
+    dir_name = os.path.dirname(file_path)
+    if dir_name is not None and len(dir_name) > 0:
+        os.makedirs(dir_name, exist_ok=True)
+    with open(file_path, opt, encoding="utf-8") as f:
+        f.write(one_line)
+
+
+def get_prologue():
+    return """/* Do not modify this file, it auto gen by md-parser script */
+#ifndef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+#define DEF_RISCV_ARG_MODE_ATTR_VARIABLE(A, B)
+#endif
+
+#ifndef DEF_RISCV_ARG_MODE_ATTR
+#define DEF_RISCV_ARG_MODE_ATTR(A, B, C, D, E)
+#endif
+
+"""
+
+
+def get_epilogue():
+    return """
+#undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+#undef DEF_RISCV_ARG_MODE_ATTR
+"""
+
+
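+# For illustration (hypothetical attribute and modes): a definition such as
+#   (define_mode_attr VM [(VNx2SI "VNx2BI") (VNx4SI "VNx4BI")])
+# is emitted by gen_mode_attr below as
+#   DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VM, 2)
+#   DEF_RISCV_ARG_MODE_ATTR(VM, 0, VNx2SI, VNx2BI, TARGET_ANY)
+#   DEF_RISCV_ARG_MODE_ATTR(VM, 1, VNx4SI, VNx4BI, TARGET_ANY)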
+def gen_mode_attr(def_seg):
+    result_str = ""
+    element_cnt = 0
+    for value_seg in def_seg.value_segment_list:
+        splitted_value_seg_string = value_seg.string.split()
+        if len(splitted_value_seg_string) == 2:
+            # We only want def_segs for intrinsic function codes whose right-hand side is a "mode" value
+            attr = splitted_value_seg_string[1]
+            if not attr.isdigit() and (attr.upper().replace('X', 'x') == attr) and not attr.startswith ("0x"):
+                result_str += "DEF_RISCV_ARG_MODE_ATTR(%s, %d, %s, %s, %s)\n" % (
+                    def_seg.name, element_cnt, splitted_value_seg_string[0], attr, "TARGET_FP16" if attr.count("HF") > 0 else "TARGET_HARD_FLOAT" if attr.count("SF") > 0 else "TARGET_DOUBLE_FLOAT" if attr.count("DF") > 0 else "TARGET_ANY")
+                element_cnt += 1
+            else:
+                # If the mode_attr's values are not all "mode" values, ignore the whole definition
+                return ""
+        else:
+            print_err("Value Segment(%s) in \"%s\" maybe not right for \"%s\"" % (
+                value_seg.string, def_seg.name, def_seg.def_type))
+    if len(result_str) > 0:
+        result_str = "DEF_RISCV_ARG_MODE_ATTR_VARIABLE(%s, %d)\n%s" % (def_seg.name, element_cnt, result_str)
+    return result_str
+
+
+def gen_mode_iterator(def_seg):
+    result_str = ""
+    element_cnt = 0
+    for value_seg in def_seg.value_segment_list:
+        splitted_value_seg_string = value_seg.string.split()
+        if len(splitted_value_seg_string) == 1:
+            result_str += "DEF_RISCV_ARG_MODE_ATTR(%s, %d, %s, %s, TARGET_ANY)\n" % (
+                def_seg.name, element_cnt, value_seg.string, value_seg.string)
+            element_cnt += 1
+        elif len(splitted_value_seg_string) == 2:
+            result_str += "DEF_RISCV_ARG_MODE_ATTR(%s, %d, %s, %s, %s)\n" % (
+                def_seg.name, element_cnt, splitted_value_seg_string[0], splitted_value_seg_string[0],
+                splitted_value_seg_string[1])
+            element_cnt += 1
+        else:
+            print_err("Value Segment(%s) in \"%s\" maybe not right for \"%s\"" % (
+                value_seg.string, def_seg.name, def_seg.def_type))
+    if len(result_str) > 0:
+        result_str = "DEF_RISCV_ARG_MODE_ATTR_VARIABLE(%s, %d)\n%s" % (def_seg.name, element_cnt, result_str)
+    return result_str
+
+
+class Segment:
+    def __init__(self):
+        self.string = ""
+        self.parentheses = 0
+
+    def append(self, char_value):
+        self.string += char_value
+        self.parentheses += 1 if char_value == '(' else -1 if char_value == ')' else 0
+
+    def is_intact(self):
+        return self.parentheses == 0
+
+    def strip_self(self):
+        self.string = self.string.replace('(', '').replace(')', '').replace('"', ' ')
+        self.string = self.string.strip()
+        return self
+
+    def __str__(self):
+        return self.string
+
+
+class DefSegment(Segment):
+    def __init__(self):
+        super().__init__()
+        self.def_type = ""
+        self.name = ""
+        self.value_str = ""
+        self.value_segment_list = []
+
+    def make_value_segments(self):
+        # split value string into multiple value segments which will contain a single or a double value array
+        i = 0
+        current_value_segment = None
+        while i < len(self.value_str):
+            if self.value_str[i] != ' ' and current_value_segment is None:
+                # new segment start
+                current_value_segment = Segment()
+                current_value_segment.append(self.value_str[i])
+            elif current_value_segment is not None:
+                current_value_segment.append(self.value_str[i])
+                if current_value_segment.is_intact() and (
+                        current_value_segment.string.endswith(' ') or current_value_segment.string.endswith(')')):
+                    self.value_segment_list.append(current_value_segment.strip_self())
+                    current_value_segment = None
+            i += 1
+        # append the last segment which is missed by the while loop
+        if current_value_segment is not None:
+            self.value_segment_list.append(current_value_segment.strip_self())
+
+    def parse(self):
+        if len(self.string) == 0:
+            return False
+        if self.string.count('[') != 1 or self.string.count(']') != 1:
+            print_err("\"%s\" is not a valid [...] format string" % self.string)
+            return False
+        # ignore the '(' and ')'
+        self.string = self.string[1:len(self.string) - 1]
+        self.def_type = self.string.split()[0]
+        self.name = self.string.split()[1]
+        # get the value string inside '[' and ']'
+        self.value_str = self.string[self.string.index('[') + 1:self.string.index(']')].strip()
+        self.make_value_segments()
+        return True
+
+
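+# Example invocation (file names are illustrative; paths are resolved
+# relative to this script's directory):
+#   python3 md-parser out-iterators.def vector-iterators.md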
+if __name__ == '__main__':
+    local_dir = os.path.dirname(os.path.realpath(__file__))
+    output_file = local_dir + '/' + sys.argv[1]
+    target_md_files = []
+    for arg in sys.argv[2:]:
+        target_md_files.append(local_dir + '/' + arg)
+    segment_list = []
+    for md_file in target_md_files:
+        with open(md_file) as file:
+            current_segment = None
+            for line in file.readlines():
+                stripped_line = line.strip()
+                if stripped_line.startswith(";;"):
+                    continue
+                if stripped_line.count("(") > 0 or current_segment is not None:
+                    char_index = 0
+                    # add a space at the end of the stripped_line for further split
+                    stripped_line += " "
+                    while char_index < len(stripped_line):
+                        if stripped_line[char_index] == '(' and current_segment is None:
+                            # new segment start
+                            current_segment = DefSegment()
+                            current_segment.append('(')
+                        elif current_segment is not None:
+                            current_segment.append(stripped_line[char_index])
+                            if current_segment.is_intact():
+                                segment_list.append(current_segment)
+                                current_segment = None
+                        char_index += 1
+    output_str = get_prologue()
+    for segment in segment_list:
+        if not segment.parse():
+            continue
+        if segment.def_type == "define_mode_iterator":
+            output_str += gen_mode_iterator(segment)
+        elif segment.def_type == "define_mode_attr":
+            output_str += gen_mode_attr(segment)
+    output_str += get_epilogue()
+    write_to_file(output_file, output_str, opt='w')
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index 795132a0c16..9bd2f1227a8 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -36,6 +36,13 @@ along with GCC; see the file COPYING3.  If not see
 #include "stor-layout.h"
 #include "expr.h"
 #include "langhooks.h"
+#include "backend.h"
+#include "gimple.h"
+#include "stringpool.h"
+#include "explow.h"
+#include "gimple-iterator.h"
+#include "riscv-protos.h"
+#include "riscv-vector-builtins.h"
 
 /* Macros to create an enumeration identifier for a function prototype.  */
 #define RISCV_FTYPE_NAME0(A) RISCV_##A##_FTYPE
@@ -189,6 +196,7 @@ riscv_build_function_type (enum riscv_function_type type)
 void
 riscv_init_builtins (void)
 {
+  riscv_vector::init_builtins ();
   for (size_t i = 0; i < ARRAY_SIZE (riscv_builtins); i++)
     {
       const struct riscv_builtin_description *d = &riscv_builtins[i];
@@ -196,7 +204,8 @@ riscv_init_builtins (void)
 	{
 	  tree type = riscv_build_function_type (d->prototype);
 	  riscv_builtin_decls[i]
-	    = add_builtin_function (d->name, type, i, BUILT_IN_MD, NULL, NULL);
+	    = add_builtin_function (d->name, type, (i << RISCV_BUILTIN_SHIFT) + RISCV_BUILTIN_GENERAL,
+				    BUILT_IN_MD, NULL, NULL);
 	  riscv_builtin_decl_index[d->icode] = i;
 	}
     }
@@ -205,11 +214,20 @@ riscv_init_builtins (void)
 /* Implement TARGET_BUILTIN_DECL.  */
 
 tree
-riscv_builtin_decl (unsigned int code, bool initialize_p ATTRIBUTE_UNUSED)
+riscv_builtin_decl (unsigned int code, bool initialize_p)
 {
-  if (code >= ARRAY_SIZE (riscv_builtins))
-    return error_mark_node;
-  return riscv_builtin_decls[code];
+  unsigned int subcode = code >> RISCV_BUILTIN_SHIFT;
+  switch (code & RISCV_BUILTIN_CLASS)
+    {
+    case RISCV_BUILTIN_GENERAL:
+      if (subcode >= ARRAY_SIZE (riscv_builtins))
+	return error_mark_node;
+      return riscv_builtin_decls[subcode];
+
+    case RISCV_BUILTIN_VECTOR:
+      return riscv_vector::builtin_decl (subcode, initialize_p);
+    }
+  return error_mark_node;
 }
 
 /* Take argument ARGNO from EXP's argument list and convert it into
@@ -271,20 +289,29 @@ riscv_expand_builtin_direct (enum insn_code icode, rtx target, tree exp,
 
 rtx
 riscv_expand_builtin (tree exp, rtx target, rtx subtarget ATTRIBUTE_UNUSED,
-		      machine_mode mode ATTRIBUTE_UNUSED,
-		      int ignore ATTRIBUTE_UNUSED)
+                      machine_mode mode ATTRIBUTE_UNUSED,
+                      int ignore ATTRIBUTE_UNUSED)
 {
   tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
   unsigned int fcode = DECL_MD_FUNCTION_CODE (fndecl);
-  const struct riscv_builtin_description *d = &riscv_builtins[fcode];
-
-  switch (d->builtin_type)
+  unsigned int subcode = fcode >> RISCV_BUILTIN_SHIFT;
+  switch (fcode & RISCV_BUILTIN_CLASS)
     {
-    case RISCV_BUILTIN_DIRECT:
-      return riscv_expand_builtin_direct (d->icode, target, exp, true);
+    case RISCV_BUILTIN_VECTOR:
+      return riscv_vector::expand_builtin (subcode, exp, target);
+    case RISCV_BUILTIN_GENERAL:
+      {
+        const struct riscv_builtin_description *d = &riscv_builtins[subcode];
 
-    case RISCV_BUILTIN_DIRECT_NO_TARGET:
-      return riscv_expand_builtin_direct (d->icode, target, exp, false);
+        switch (d->builtin_type)
+          {
+          case RISCV_BUILTIN_DIRECT:
+            return riscv_expand_builtin_direct (d->icode, target, exp, true);
+
+          case RISCV_BUILTIN_DIRECT_NO_TARGET:
+            return riscv_expand_builtin_direct (d->icode, target, exp, false);
+          }
+      }
     }
 
   gcc_unreachable ();
@@ -307,3 +334,36 @@ riscv_atomic_assign_expand_fenv (tree *hold, tree *clear, tree *update)
   *clear = build_call_expr (fsflags, 1, old_flags);
   *update = NULL_TREE;
 }
+
+/* Implement TARGET_GIMPLE_FOLD_BUILTIN.  */
+
+bool
+riscv_gimple_fold_builtin (gimple_stmt_iterator *gsi)
+{
+
+  gcall *stmt = as_a <gcall *> (gsi_stmt (*gsi));
+  tree fndecl = gimple_call_fndecl (stmt);
+  unsigned int code = DECL_MD_FUNCTION_CODE (fndecl);
+  unsigned int subcode = code >> RISCV_BUILTIN_SHIFT;
+  gimple *new_stmt = NULL;
+  switch (code & RISCV_BUILTIN_CLASS)
+    {
+    /* General builtins can fold gimple if necessary;
+       this may be wrapped into a function in the future.  */
+    case RISCV_BUILTIN_GENERAL:
+      return false;
+
+    case RISCV_BUILTIN_VECTOR:
+      new_stmt = riscv_vector::gimple_fold_builtin (subcode, gsi, stmt);
+      break;
+    }
+
+  if (!new_stmt)
+    return false;
+
+  gsi_replace (gsi, new_stmt, true);
+
+  return true;
+}
+
+#include "gt-riscv-builtins.h"
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 19c50f0e702..1cb3586d1f1 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -87,6 +87,7 @@ extern void riscv_atomic_assign_expand_fenv (tree *, tree *, tree *);
 extern rtx riscv_expand_builtin (tree, rtx, rtx, machine_mode, int);
 extern tree riscv_builtin_decl (unsigned int, bool);
 extern void riscv_init_builtins (void);
+extern bool riscv_gimple_fold_builtin (gimple_stmt_iterator *);
 
 /* Routines implemented in riscv-common.cc.  */
 extern std::string riscv_arch_str (bool version_p = true);
@@ -116,4 +117,22 @@ extern unsigned int rvv_offset_temporaries (bool, poly_int64);
 extern enum vlmul_field_enum riscv_classify_vlmul_field (machine_mode);
 extern int rvv_regsize (machine_mode);
 
+/* We classify builtin types into two classes:
+   1. General builtin class, which uses the
+      original builtin machinery of the
+      RISC-V port.
+   2. Vector builtin class, a separate builtin
+      scheme whose intrinsics are registered
+      through the vector "pragma".  */
+enum riscv_builtin_class
+{
+  RISCV_BUILTIN_GENERAL,
+  RISCV_BUILTIN_VECTOR
+};
+
+const unsigned int RISCV_BUILTIN_SHIFT = 1;
+
+/* Mask that selects the vector_builtin_class part of a function code.  */
+const unsigned int RISCV_BUILTIN_CLASS = (1 << RISCV_BUILTIN_SHIFT) - 1;
+
 #endif /* ! GCC_RISCV_PROTOS_H */
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
new file mode 100644
index 00000000000..19bcb66a83f
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -0,0 +1,1012 @@
+/* Intrinsic implementation for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#define IN_TARGET_CODE 1
+
+#include "riscv-vector-builtins-functions.h"
+namespace riscv_vector
+{
+
+extern hash_table<registered_function_hasher> *function_table;
+extern GTY (()) tree
+    vector_types[MAX_TUPLE_NUM][NUM_VECTOR_TYPES + 1][MAX_VLMUL_FIELD];
+extern GTY (()) tree
+    vector_pointer_types[NUM_VECTOR_TYPES + 1][MAX_VLMUL_FIELD];
+extern GTY(()) tree scalar_types[NUM_VECTOR_TYPES];
+extern GTY(()) tree scalar_pointer_types[NUM_VECTOR_TYPES];
+extern GTY(()) tree const_scalar_pointer_types[NUM_VECTOR_TYPES];
+
+extern GTY (()) vec<registered_function *, va_gc> *registered_functions;
+
+/* Flags that describe what a function might do, in addition to reading
+   its arguments and returning a result.  */
+static const unsigned int CP_READ_FPCR = 1U << 0;
+static const unsigned int CP_RAISE_FP_EXCEPTIONS = 1U << 1;
+static const unsigned int CP_RAISE_LD_EXCEPTIONS = 1U << 2;
+static const unsigned int CP_READ_MEMORY = 1U << 3;
+static const unsigned int CP_WRITE_MEMORY = 1U << 4;
+static const unsigned int CP_READ_CSR = 1U << 5;
+static const unsigned int CP_WRITE_CSR = 1U << 6;
+
+/* True if we've already complained about attempts to use functions
+   when the required extension is disabled.  */
+static bool reported_missing_extension_p;
+
+static tree
+mode2mask_t (machine_mode mode)
+{
+  int factor = exact_div (GET_MODE_NUNITS (mode), GET_MODE_NUNITS (VNx2BImode))
+                   .to_constant ();
+  factor = exact_log2 (factor);
+  return vector_types[0][VECTOR_TYPE_bool][factor];
+}
+
+static enum vector_type_index
+get_type_idx (machine_mode mode, bool u)
+{
+  #define RVV_INT_TYPE(MODE, TYPE, SEW) \
+    case MODE: \
+      return u ? VECTOR_TYPE_u##TYPE##SEW : VECTOR_TYPE_##TYPE##SEW; \
+      break;
+  #define RVV_FLOAT_TYPE(MODE, TYPE, SEW) \
+    case MODE: \
+      return VECTOR_TYPE_##TYPE##SEW; \
+      break;
+  
+  switch (mode)
+    {
+    case BImode:
+      return VECTOR_TYPE_bool;
+    RVV_INT_TYPE (QImode, int, 8)
+    RVV_INT_TYPE (HImode, int, 16)
+    RVV_INT_TYPE (SImode, int, 32)
+    RVV_INT_TYPE (DImode, int, 64)
+    RVV_FLOAT_TYPE (SFmode, float, 32)
+    RVV_FLOAT_TYPE (DFmode, float, 64)
+    default:
+      gcc_unreachable ();
+    }
+  
+  return (enum vector_type_index) 0;
+}
+
+static int
+get_lmul_idx (machine_mode mode)
+{
+  machine_mode innermode = GET_MODE_INNER (mode);
+  bool is_bool_p = GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL ? true : false;
+  machine_mode base_mode = is_bool_p ? VNx2QImode : VNx2BImode;
+  int offset = exact_div (GET_MODE_NUNITS (mode), GET_MODE_NUNITS (base_mode))
+                   .to_constant ();
+  if (is_bool_p)
+    return exact_log2 (offset);
+  else
+    {
+      int nf = 1;
+      offset = exact_log2 (offset / nf);
+      int factor = exact_log2 (GET_MODE_BITSIZE (innermode).to_constant () /
+                               GET_MODE_BITSIZE (QImode));
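+      // E.g. (illustrative) for VNx4SImode: offset = log2 (4 / 2) = 1 and
+      // factor = log2 (32 / 8) = 2, giving index 3.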
+      return offset + factor;
+    }
+}
+
+static tree
+get_dt_t (machine_mode mode, bool u, bool ptr = false, bool c = false)
+{
+  if (mode == VOIDmode)
+    return void_type_node;
+    
+  machine_mode innermode = GET_MODE_INNER (mode);
+  enum vector_type_index base = get_type_idx (innermode, u);
+  tree type = NULL_TREE;
+  if (VECTOR_MODE_P (mode))
+    {
+      int offset = get_lmul_idx (mode);      
+      if (ptr)
+        type = vector_pointer_types[base][offset];
+      else
+        type = vector_types[0][base][offset];
+      
+      gcc_assert (type);
+      return type;
+    }
+  
+  if (ptr)
+    {
+      if (c)
+        type = const_scalar_pointer_types[base];
+      else
+        type = scalar_pointer_types[base];
+    }
+  else
+    type = scalar_types[base];
+  
+  gcc_assert (type);
+  return type;
+}
+
+/* Return true if the function has no return value.  */
+static bool
+function_returns_void_p (tree fndecl)
+{
+  return TREE_TYPE (TREE_TYPE (fndecl)) == void_type_node;
+}
+
+/* Take argument ARGNO from EXP's argument list and convert it into
+   an expand operand.  Store the operand in *OP.  */
+
+static void
+add_input_operand (struct expand_operand *op, tree exp, unsigned argno)
+{
+  tree arg = CALL_EXPR_ARG (exp, argno);
+  rtx x = expand_normal (arg);
+  create_input_operand (op, x, TYPE_MODE (TREE_TYPE (arg)));
+}
+
+/* Expand instruction ICODE as part of a built-in function sequence.
+   Use the first NOPS elements of OPS as the instruction's operands.
+   HAS_TARGET_P is true if operand 0 is a target; it is false if the
+   instruction has no target.
+
+   Return the target rtx if HAS_TARGET_P, otherwise return const0_rtx.  */
+
+static rtx
+generate_builtin_insn (enum insn_code icode, unsigned int n_ops,
+                       struct expand_operand *ops, bool has_target_p)
+{
+  if (!maybe_expand_insn (icode, n_ops, ops))
+    {
+      error ("invalid argument to built-in function");
+      return has_target_p ? gen_reg_rtx (ops[0].mode) : const0_rtx;
+    }
+
+  return has_target_p ? ops[0].value : const0_rtx;
+}
+
+/* Return a hash code for a function_instance.  */
+static hashval_t
+get_string_hash (const char * input_string)
+{
+  if (!input_string || strlen (input_string) == 0)
+    return 0;
+
+  inchash::hash h;
+
+  for (unsigned int i = 0; i < strlen (input_string); i += 4)
+    {
+      unsigned int four_chars = 0;
+
+      for (unsigned int j = i; j < strlen (input_string) && j < i + 4; j++)
+        {
+          four_chars |= input_string[j] << (8 * (j - i));
+        }
+
+      h.add_int (four_chars);
+    }
+
+  return h.end ();
+}
+
+/* Report an error against LOCATION that the user has tried to use
+   function FNDECL when extension EXTENSION is disabled.  */
+static void
+report_missing_extension (location_t location, tree fndecl,
+                          const char *extension)
+{
+  /* Avoid reporting a slew of messages for a single oversight.  */
+  if (reported_missing_extension_p)
+    return;
+
+  error_at (location, "rvv function %qD requires ISA extension %qs", fndecl,
+            extension);
+  inform (location,
+          "you can enable %qs using the command-line"
+          " option %<-march%>",
+          extension);
+  reported_missing_extension_p = true;
+}
+
+/* Add attribute NAME to ATTRS.  */
+static tree
+add_attribute (const char *name, tree attrs)
+{
+  return tree_cons (get_identifier (name), NULL_TREE, attrs);
+}
+
+inline hashval_t
+registered_function_hasher::hash (value_type value)
+{
+  return value->instance.hash ();
+}
+
+inline bool
+registered_function_hasher::equal (value_type value, const compare_type &key)
+{
+  return value->instance == key;
+}
+
+/* function_instance class implementation.  */
+
+function_instance::function_instance (function_builder *__builder,
+                                      const char *__base_name,
+                                      vector_arg_modes &__arg_pattern,
+                                      enum predication_index __pred,
+                                      enum operation_index __operation)
+    : m_builder (__builder), m_base_name (__base_name),
+      m_target_arg_pattern (__arg_pattern), m_target_pred (__pred),
+      m_target_operation (__operation)
+{
+  function_name[0] = '\0';
+}
+
+function_instance::function_instance (const char *__name)
+{
+  strncpy (function_name, __name, NAME_MAXLEN);
+}
+
+function_instance::~function_instance ()
+{
+}
+
+inline bool
+function_instance::operator== (const function_instance &other) const
+{
+  return (strcmp (function_name, other.function_name) == 0);
+}
+
+inline bool
+function_instance::operator!= (const function_instance &other) const
+{
+  return !operator== (other);
+}
+
+/* Return a hash code for a function_instance.  */
+hashval_t
+function_instance::hash () const
+{
+  return get_string_hash (function_name);
+}
+
+bool
+function_instance::check (location_t, tree, tree, unsigned int, tree *) const
+{
+  return true;
+}
+
+/* Return a set of CP_* flags that describe what the function could do,
+   taking the command-line flags into account.  */
+unsigned int
+function_instance::call_properties () const
+{
+  unsigned int flags = m_builder->call_properties ();
+
+  /* -fno-trapping-math means that we can assume any FP exceptions
+     are not user-visible.  */
+  if (!flag_trapping_math)
+    flags &= ~CP_RAISE_FP_EXCEPTIONS;
+
+  return flags;
+}
+
+/* Return true if calls to the function could read some form of
+   global state.  */
+bool
+function_instance::reads_global_state_p () const
+{
+  unsigned int flags = call_properties ();
+
+  /* Preserve any dependence on rounding mode, flush to zero mode, etc.
+     There is currently no way of turning this off; in particular,
+     -fno-rounding-math (which is the default) means that we should make
+     the usual assumptions about rounding mode, which for intrinsics means
+     acting as the instructions do.  */
+  if (flags & CP_READ_FPCR)
+    return true;
+
+  /* Handle direct reads of global state.  */
+  return flags & (CP_READ_MEMORY | CP_READ_CSR);
+}
+
+/* Return true if calls to the function could modify some form of
+   global state.  */
+bool
+function_instance::modifies_global_state_p () const
+{
+  unsigned int flags = call_properties ();
+
+  /* Preserve any exception state written back to the FPCR,
+     unless -fno-trapping-math says this is unnecessary.  */
+  if (flags & (CP_RAISE_FP_EXCEPTIONS | CP_RAISE_LD_EXCEPTIONS))
+    return true;
+
+  /* Handle direct modifications of global state.  */
+  return flags & (CP_WRITE_MEMORY | CP_WRITE_CSR);
+}
+
+/* Return true if calls to the function could raise a signal.  */
+bool
+function_instance::could_trap_p () const
+{
+  unsigned int flags = call_properties ();
+
+  /* Handle functions that could raise SIGFPE.  */
+  if (flags & (CP_RAISE_FP_EXCEPTIONS | CP_RAISE_LD_EXCEPTIONS))
+    return true;
+
+  /* Handle functions that could raise SIGBUS or SIGSEGV.  */
+  if (flags & (CP_READ_MEMORY | CP_WRITE_MEMORY))
+    return true;
+
+  return false;
+}
+
+const char *
+function_instance::get_base_name () const
+{
+  return m_base_name;
+}
+
+vector_arg_modes
+function_instance::get_arg_pattern () const
+{
+  return m_target_arg_pattern;
+}
+
+enum predication_index
+function_instance::get_pred () const
+{
+  return m_target_pred;
+}
+
+unsigned int
+function_instance::get_vma_vta () const
+{
+  return any_policy;
+}
+
+enum operation_index
+function_instance::get_operation () const
+{
+  return m_target_operation;
+}
+
+enum data_type_index *
+function_instance::get_data_type_list () const
+{
+  return m_builder->get_data_type_list ();
+}
+
+function_builder *
+function_instance::builder () const
+{
+  return m_builder;
+}
+
+/* function_builder class implementation.  */
+
+function_builder::function_builder (const char *__base_name,
+                                    vector_arg_all_modes &__arg_patterns,
+                                    uint64_t __pattern,
+                                    uint64_t __preds,
+                                    uint64_t __op_types,
+                                    const unsigned int __extensions)
+    : m_base_name (__base_name), m_target_arg_patterns (__arg_patterns),
+      m_target_pattern (__pattern), m_target_preds (__preds),
+      m_target_op_types (__op_types), m_required_extensions (__extensions)
+{
+  m_iter_idx_cnt = 0;
+  m_iter_arg_cnt = 0;
+  m_iter_arg_idx_list = (unsigned int *)xmalloc (m_target_arg_patterns.arg_len *
+                                                 sizeof (unsigned int));
+
+  for (unsigned int i = 0; i < m_target_arg_patterns.arg_len; i++)
+    {
+      // use mode iterator as mode iter
+      if (m_target_arg_patterns.target_op_list[i] < 0)
+        {
+          m_iter_idx_cnt =
+              (m_iter_idx_cnt == 0)
+                  ? m_target_arg_patterns.arg_list[i]->attr_len
+                  : m_iter_idx_cnt *
+                        m_target_arg_patterns.arg_list[i]->attr_len;
+          m_iter_arg_idx_list[m_iter_arg_cnt++] = i;
+        }
+    }
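+  // At this point m_iter_idx_cnt is the product of the attribute list
+  // lengths of all iterator arguments, i.e. the number of distinct mode
+  // combinations register_function will enumerate.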
+
+  m_direct_overloads = lang_GNU_CXX ();
+
+  gcc_assert (m_iter_arg_cnt > 0);
+  gcc_assert (m_iter_idx_cnt > 0);
+  
+  gcc_obstack_init (&m_string_obstack);
+}
+
+function_builder::~function_builder ()
+{
+  free (m_iter_arg_idx_list);
+  obstack_free (&m_string_obstack, NULL);
+}
+
+/* Add NAME to the end of the function name being built.  */
+void
+function_builder::append_name (const char *name)
+{
+  obstack_grow (&m_string_obstack, name, strlen (name));
+}
+
+/* Zero-terminate and complete the function name being built.  */
+char *
+function_builder::finish_name ()
+{
+  obstack_1grow (&m_string_obstack, 0);
+  return (char *) obstack_finish (&m_string_obstack);
+}
+
+rtx
+function_builder::expand_builtin_insn (enum insn_code icode, tree exp,
+                                       rtx target,
+                                       const function_instance &instance) const
+{
+  gcc_assert (call_expr_nargs (exp) > 0);
+  struct expand_operand ops[MAX_RECOG_OPERANDS];
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
+  enum predication_index pred = instance.get_pred ();
+
+  /* Map any target to operand 0.  */
+  int opno = 0;
+  int offset = 0;
+
+  if (!function_returns_void_p (fndecl))
+    create_output_operand (&ops[opno++], target, mode);
+
+  if (need_mask_operand_p ())
+    {
+      if (has_mask_arg_p (pred))
+        add_input_operand (&ops[opno++], exp, offset++);
+      else
+        create_input_operand (&ops[opno++], const0_rtx, Pmode);
+    }
+
+  if (need_dest_operand_p ())
+    {
+      if (has_dest_arg_p (pred))
+        add_input_operand (&ops[opno++], exp, offset++);
+      else
+        create_input_operand (&ops[opno++], const0_rtx, Pmode);
+    }
+
+  for (int argno = offset; argno < call_expr_nargs (exp); argno++)
+    add_input_operand (&ops[opno++], exp, argno);
+
+  if (pred != PRED_none)
+    create_input_operand (&ops[opno++], GEN_INT (get_policy (pred)), Pmode);
+
+  /* Map the arguments to the other operands.  */
+  gcc_assert (opno == insn_data[icode].n_generator_args);
+  return generate_builtin_insn (icode, opno, ops,
+                                !function_returns_void_p (fndecl));
+}
+
+gimple *
+function_builder::fold (const function_instance &, gimple_stmt_iterator *,
+                        gcall *) const
+{
+  return NULL;
+}
+
+tree
+function_builder::get_return_type (const function_instance &) const
+{
+  return void_type_node;
+}
+
+size_t
+function_builder::get_dest_arguments_length () const
+{
+  return 1;
+}
+
+tree
+function_builder::get_mask_type (tree return_type,
+                                 const function_instance &instance,
+                                 const vec<tree> &) const
+{
+  tree type = return_type;
+  if (!VECTOR_MODE_P (TYPE_MODE (type)))
+    {
+      /* Fetch the vector mode start from arg[0]. */
+      for (unsigned int i = 0; i < instance.get_arg_pattern ().arg_len; i++)
+        {
+          if (VECTOR_MODE_P (instance.get_arg_pattern ().arg_list[i]))
+            {
+              type =
+                  get_dt_t (instance.get_arg_pattern ().arg_list[i],
+                            instance.get_data_type_list ()[i] == DT_unsigned);
+              break;
+            }
+        }
+    }
+  gcc_assert (VECTOR_MODE_P (TYPE_MODE (type)));
+  if (GET_MODE_CLASS (TYPE_MODE (type)) == MODE_VECTOR_BOOL)
+    return type;
+  else
+    {
+      machine_mode mask_mode;
+      gcc_assert (
+          rvv_get_mask_mode (TYPE_MODE (type)).exists (&mask_mode));
+      return mode2mask_t (mask_mode);
+    }
+}
+
+void
+function_builder::get_argument_types (const function_instance &,
+                                      vec<tree> &) const
+{
+}
+
+char *
+function_builder::assemble_name (function_instance &)
+{
+  return nullptr;
+}
+
+uint64_t
+function_builder::get_pattern () const
+{
+  return m_target_pattern;
+}
+
+bool
+function_builder::need_mask_operand_p () const
+{
+  uint64_t pat = get_pattern ();
+
+  return (pat & PAT_mask) || (pat & PAT_merge);
+}
+
+bool
+function_builder::need_dest_operand_p () const
+{
+  uint64_t pat = get_pattern ();
+
+  return (pat & PAT_tail) || ((pat & PAT_mask) && !(pat & PAT_ignore_policy));
+}
+
+bool
+function_builder::has_mask_arg_p (enum predication_index pred) const
+{
+  uint64_t pat = get_pattern ();
+
+  return pred == PRED_m || pred == PRED_tam || pred == PRED_tum ||
+         pred == PRED_tama || pred == PRED_tamu || pred == PRED_tuma ||
+         pred == PRED_tumu || pred == PRED_ma || pred == PRED_mu ||
+         (pat & PAT_merge);
+}
+
+bool
+function_builder::has_dest_arg_p (enum predication_index pred) const
+{
+  uint64_t pat = get_pattern ();
+
+  switch (pred)
+    {
+    case PRED_void:
+      return (pat & PAT_void_dest) || (pat & PAT_dest);
+    case PRED_ta:
+    case PRED_tama:
+      return (pat & PAT_dest);
+    case PRED_m:
+      return !(pat & PAT_ignore_policy);
+    case PRED_tu:
+    case PRED_tum:
+    case PRED_mu:
+    case PRED_tamu:
+    case PRED_tuma:
+    case PRED_tumu:
+      return true;
+    default:
+      return false;
+    }
+}
+
+bool
+function_builder::can_be_overloaded_p (const function_instance &) const
+{
+  return false;
+}
+
+unsigned int
+function_builder::get_policy (enum predication_index pred) const
+{
+  uint64_t pat = get_pattern ();
+
+  switch (pred)
+    {
+    case PRED_void:
+      return (pat & PAT_void_dest || pat & PAT_dest) ? tu_policy : ta_policy;
+    case PRED_m:
+      if (pat & PAT_ignore_policy)
+        return any_policy;
+      else if (pat & PAT_ignore_mask_policy)
+        return tu_policy;
+      else if (pat & PAT_ignore_tail_policy)
+        return mu_policy;
+      else
+        return tumu_policy;
+    case PRED_ta:
+    case PRED_tam:
+      return ta_policy;
+    case PRED_tu:
+    case PRED_tum:
+      return tu_policy;
+    case PRED_ma:
+      return ma_policy;
+    case PRED_mu:
+      return mu_policy;
+    case PRED_tama:
+      return tama_policy;
+    case PRED_tamu:
+      return tamu_policy;
+    case PRED_tuma:
+      return tuma_policy;
+    case PRED_tumu:
+      return tumu_policy;
+    default:
+      return any_policy;
+    }
+}
+
+size_t
+function_builder::get_position_of_mask_arg (enum predication_index) const
+{
+  return 0;
+}
+
+size_t
+function_builder::get_position_of_dest_arg (enum predication_index pred) const
+{
+  uint64_t pat = get_pattern ();
+  if (pred == PRED_tu ||
+      (pred == PRED_void && (pat & PAT_void_dest || pat & PAT_dest)) ||
+      (pred == PRED_ta && pat & PAT_dest))
+    return 0;
+  else
+    return 1;
+}
+
+unsigned int
+function_builder::call_properties () const
+{
+  return 0;
+}
+
+enum data_type_index *
+function_builder::get_data_type_list () const
+{
+  return m_target_arg_patterns.dt_list;
+}
+
+/* Return the appropriate function attributes for INSTANCE.  */
+tree
+function_builder::get_attributes (const function_instance &instance) const
+{
+  tree attrs = NULL_TREE;
+
+  if (!instance.modifies_global_state_p ())
+    {
+      if (instance.reads_global_state_p ())
+        attrs = add_attribute ("pure", attrs);
+      else
+        attrs = add_attribute ("const", attrs);
+    }
+
+  if (!flag_non_call_exceptions || !instance.could_trap_p ())
+    attrs = add_attribute ("nothrow", attrs);
+
+  return add_attribute ("leaf", attrs);
+}
+
+/* Add a function called NAME with type FNTYPE and attributes ATTRS.
+   INSTANCE describes what the function does and OVERLOADED_P indicates
+   whether it is overloaded.  PLACEHOLDER_P is true if the function
+   should only reserve a function code, not a real declaration.  */
+registered_function &
+function_builder::add_function (const function_instance &instance,
+                                const char *name, tree fntype, tree attrs,
+                                bool overloaded_p,
+                                bool placeholder_p) const
+{
+  unsigned int code = vec_safe_length (registered_functions);
+  code = (code << RISCV_BUILTIN_SHIFT) + RISCV_BUILTIN_VECTOR;
+  
+  /* We need to be able to generate placeholders to ensure that we have a
+     consistent numbering scheme for function codes between the C and C++
+     frontends, so that everything ties up in LTO.
+
+     Currently, tree-streamer-in.cc:unpack_ts_function_decl_value_fields
+     validates that tree nodes returned by TARGET_BUILTIN_DECL are non-NULL and
+     some node other than error_mark_node. This is a holdover from when builtin
+     decls were streamed by code rather than by value.
+
+     Ultimately, we should be able to remove this validation of BUILT_IN_MD
+     nodes and remove the target hook. For now, however, we need to appease the
+     validation and return a non-NULL, non-error_mark_node node, so we
+     arbitrarily choose integer_zero_node.  */
+  tree decl = placeholder_p
+                  ? integer_zero_node
+                  : simulate_builtin_function_decl (input_location, name,
+                                                    fntype, code, NULL, attrs);
+
+  registered_function &rfn = *ggc_alloc<registered_function> ();
+  rfn.instance = instance;
+  rfn.decl = decl;
+  rfn.overloaded_p = overloaded_p;
+  rfn.required_extensions = m_required_extensions;
+  vec_safe_push (registered_functions, &rfn);
+
+  return rfn;
+}
+
+/* Add a built-in function for INSTANCE, with the argument types given
+   by ARGUMENT_TYPES and the return type given by RETURN_TYPE.  The
+   required architecture extensions are taken from the builder.  If
+   assemble_name produces an overloaded ("short") name, that name is
+   registered as well, either directly or as a placeholder depending
+   on m_direct_overloads.  */
+void
+function_builder::add_unique_function (function_instance &instance,
+                                       tree return_type,
+                                       vec<tree> &argument_types)
+{
+  /* Add the function under its full (unique) name.  */
+  char *overloaded_name = this->assemble_name (instance);
+  if (instance.function_name[0] == '\0')
+    return;
+    
+  tree fntype = build_function_type_array (
+      return_type, argument_types.length (), argument_types.address ());
+  tree attrs = get_attributes (instance);
+  registered_function &rfn =
+      add_function (instance, instance.function_name, fntype, attrs, false, false);
+
+  /* Enter the function into the hash table.  */
+  hashval_t hashval = instance.hash ();
+  registered_function **rfn_slot =
+      function_table->find_slot_with_hash (instance, hashval, INSERT);
+  
+  if (*rfn_slot)
+    {
+      error ("duplicate function name: %s", instance.function_name);
+      gcc_unreachable ();
+    }
+
+  *rfn_slot = &rfn;
+
+  if (overloaded_name)
+    {
+      /* Attribute lists shouldn't be shared.  */
+      tree attrs = get_attributes (instance);
+      bool placeholder_p = !m_direct_overloads;
+      add_function (instance, overloaded_name, fntype, attrs,
+                    false, placeholder_p);
+      obstack_free (&m_string_obstack, overloaded_name);
+    }
+}
+
+/* Check whether all the RISCV_* values in REQUIRED_EXTENSIONS are
+   enabled, given that those extensions are required for function FNDECL.
+   Report an error against LOCATION if not.  */
+bool
+function_builder::check_required_extensions (location_t location, tree fndecl,
+                                             uint64_t required_extensions) const
+{
+  /* For instructions that don't need any extension, return true.  */
+  if (required_extensions == 0)
+    return true;
+
+  /* Check whether the vector extension is enabled.  */
+  if ((required_extensions & 0x1) && !TARGET_VECTOR)
+    report_missing_extension (location, fndecl, "V");
+
+  /* Check whether the F extension is enabled.  */
+  if (((required_extensions >> 4) & 0x1) && !TARGET_HARD_FLOAT)
+    report_missing_extension (location, fndecl, "F");
+
+  /* Check whether the D extension is enabled.  */
+  if (((required_extensions >> 5) & 0x1) && !TARGET_DOUBLE_FLOAT)
+    report_missing_extension (location, fndecl, "D");
+
+  return true;
+}
+
+/* If INSTANCE has a governing predicate, add it to the list of argument
+   types in ARGUMENT_TYPES.  RETURN_TYPE is the type returned by the
+   function.  */
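+/* For example (illustrative): with PRED_tamu, an argument list
+   (op1, op2) becomes (mask, dest, op1, op2, vl).  */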
+void
+function_builder::apply_predication (const function_instance &instance,
+                                     tree return_type,
+                                     vec<tree> &argument_types) const
+{
+  /* Check whether a mask parameter is needed.  */
+  if (has_mask_arg_p (instance.get_pred ()))
+    {
+      argument_types.quick_insert (
+          get_position_of_mask_arg (instance.get_pred ()),
+          get_mask_type (return_type, instance, argument_types));
+    }
+
+  /* Check whether a dest parameter is needed.  */
+  if (has_dest_arg_p (instance.get_pred ()))
+    {
+      size_t size = get_dest_arguments_length ();
+      if (argument_types.is_empty ())
+        for (size_t i = 0; i < size; i += 1)
+          argument_types.quick_push (return_type);
+      else
+        for (size_t i = 0; i < size; i += 1)
+          argument_types.quick_insert (get_position_of_dest_arg (instance.get_pred ()) + i,
+                                       return_type);
+    }
+
+  /* Check whether a vl parameter is needed.  */
+  if (instance.get_pred () != PRED_none)
+    argument_types.quick_push (size_type_node);
+}
+
+void
+function_builder::build_one (function_instance &instance)
+{
+  /* Byte forms of vlxusegei take 21 arguments.  */
+  auto_vec<tree, 21> argument_types;
+  tree return_type = get_return_type (instance);
+  get_argument_types (instance, argument_types);
+  apply_predication (instance, return_type, argument_types);
+  add_unique_function (instance, return_type, argument_types);
+}
+
+vector_arg_modes &
+function_builder::get_arg_modes_by_iter_idx (unsigned int iter_idx) const
+{
+  if (iter_idx >= m_iter_idx_cnt)
+    {
+      gcc_unreachable ();
+    }
+
+  unsigned int coefficient = 1;
+  machine_mode *arg_modes = (machine_mode *)xmalloc (
+      sizeof (machine_mode) * m_target_arg_patterns.arg_len);
+  vector_arg_modes &arg_modes_info =
+      *(vector_arg_modes *)xmalloc (sizeof (vector_arg_modes));
+  arg_modes_info.arg_len = m_target_arg_patterns.arg_len;
+  arg_modes_info.arg_list = arg_modes;
+  arg_modes_info.arg_extensions = m_required_extensions;
+
+  // set mode for iter args first
+  for (unsigned int i = 0; i < m_iter_arg_cnt; i++)
+    {
+      // get the iter arg's attribute length
+      unsigned int arg_attr_len =
+          m_target_arg_patterns.arg_list[m_iter_arg_idx_list[i]]->attr_len;
+      vector_mode_attr target_mode_attr =
+          m_target_arg_patterns.arg_list[m_iter_arg_idx_list[i]]
+              ->attr_list[(iter_idx / coefficient) % arg_attr_len];
+      arg_modes[m_iter_arg_idx_list[i]] = target_mode_attr.mode;
+      arg_modes_info.arg_extensions |= target_mode_attr.extension;
+      coefficient *= arg_attr_len;
+    }
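+  // The loop above decodes ITER_IDX as a mixed-radix number: with two
+  // iterator arguments of attribute lengths 3 and 4 (illustrative), the
+  // first uses index (iter_idx % 3) and the second ((iter_idx / 3) % 4),
+  // covering all 12 combinations.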
+
+  // set mode for attr args
+  for (unsigned int i = 0; i < m_target_arg_patterns.arg_len; i++)
+    {
+      // skip iter args
+      if (m_target_arg_patterns.target_op_list[i] < 0)
+        continue;
+
+      machine_mode iter_mode =
+          arg_modes[m_target_arg_patterns.target_op_list[i]];
+      vector_mode_attr_list *arg_attr_list = m_target_arg_patterns.arg_list[i];
+      bool attr_mode_hit = false;
+
+      for (unsigned int j = 0; j < arg_attr_list->attr_len; j++)
+        {
+          if (iter_mode == arg_attr_list->attr_list[j].mode)
+            {
+              attr_mode_hit = true;
+              arg_modes[i] = arg_attr_list->attr_list[j].attr;
+              arg_modes_info.arg_extensions |=
+                  arg_attr_list->attr_list[j].extension;
+            }
+        }
+
+      // One argument mode was not hit, which means no mode may be valid
+      // for this iter_idx.
+      if (!attr_mode_hit)
+        {
+          // set arg_len to 0 to skip this arg pattern
+          arg_modes_info.arg_len = 0;
+          break;
+        }
+    }
+
+  return arg_modes_info;
+}
+
+void
+function_builder::register_function ()
+{
+  for (unsigned iter_idx = 0; iter_idx < m_iter_idx_cnt; iter_idx++)
+    {
+      vector_arg_modes &arg_modes = get_arg_modes_by_iter_idx (iter_idx);
+      
+      bool skip_p = false;
+      if (arg_modes.arg_len == 0)
+        skip_p = true;
+        
+      if (FLOAT_MODE_P (arg_modes.arg_list[0])
+          && get_data_type_list ()[0] == DT_unsigned)
+        skip_p = true;
+
+      if (skip_p)
+        {
+          free (arg_modes.arg_list);
+          free (&arg_modes);
+          continue;
+        }
+      for (unsigned pred = 1; pred < NUM_PREDS; pred <<= 1)
+        {
+          if ((m_target_preds & pred) == 0)
+            continue;
+
+          for (unsigned op_type = 1; op_type < NUM_OP; op_type <<= 1)
+            {
+              if ((m_target_op_types & op_type) == 0)
+                continue;
+
+              function_instance instance (
+                  this, m_base_name, arg_modes,
+                  (enum predication_index) (m_target_preds & pred),
+                  (enum operation_index) (m_target_op_types & op_type));
+              build_one (instance);
+            }
+        }
+    }
+}
+
+} // end namespace riscv_vector
+
+using namespace riscv_vector;
+
+inline void
+gt_ggc_mx (function_instance *)
+{
+}
+
+inline void
+gt_pch_nx (function_instance *)
+{
+}
+
+inline void
+gt_pch_nx (function_instance *, gt_pointer_operator, void *)
+{
+}
+
+#include "gt-riscv-vector-builtins-functions.h"
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
new file mode 100644
index 00000000000..f6161012813
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -0,0 +1,34 @@
+/* Intrinsic macros for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+   Based on MIPS target for GNU compiler.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#ifndef DEF_RVV_FUNCTION
+#define DEF_RVV_FUNCTION(A, B, C, D, E, F, G)
+#endif
+#ifndef VITER
+#define VITER(A)
+#endif
+#ifndef VATTR
+#define VATTR(A, B)
+#endif
+
+#undef DEF_RVV_FUNCTION
+#undef VITER
+#undef VATTR
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h
new file mode 100644
index 00000000000..1b769743857
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.h
@@ -0,0 +1,491 @@
+/* Intrinsic definitions for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
+#define GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "cgraph.h"
+#include "diagnostic.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "varasm.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "stor-layout.h"
+#include "regs.h"
+#include "alias.h"
+#include "gimple-fold.h"
+#include "langhooks.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "tree-pass.h"
+#include "tree-vrp.h"
+#include "tree-ssanames.h"
+#include "tree-ssa-operands.h"
+#include "tree-phinodes.h"
+#include "targhooks.h"
+#include "langhooks-def.h"
+#include "riscv-vector.h"
+#include <string>
+
+namespace riscv_vector
+{
+  
+/* The maximum length of a function name string.  */
+static const unsigned int NAME_MAXLEN = 32;
+/* The maximum tuple size.  For the RISC-V 'V' Extension,
+   the maximum tuple size is 8.
+   TODO: Support tuple types for segment instructions later.  */
+static const unsigned int MAX_TUPLE_NUM = 1;
+
+/* Describes the various data types.
+   Used by riscv-vector-builtins-iterators.def.  */
+enum data_type_index
+{
+  /* signed data. */
+  DT_signed,
+  /* unsigned data. */
+  DT_unsigned,
+  /* signed data pointer. */
+  DT_ptr,
+  /* unsigned data pointer. */
+  DT_uptr,
+  /* const signed data pointer. */
+  DT_c_ptr,
+  /* const unsigned data pointer. */
+  DT_c_uptr,
+};
+
+/* Describes the various uses of a governing predicate.
+   Used by riscv-vector-builtins-iterators.def.  */
+enum predication_index
+{
+  /* No governing predicate is present.  */
+  PRED_none = 1 << 0,
+  /* tail agnostic, ignore mask policy */
+  PRED_ta = 1 << 3,
+  /* tail undisturbed, ignore mask policy */
+  PRED_tu = 1 << 4,
+  /* mask agnostic, ignore tail policy */
+  PRED_ma = 1 << 5,
+  /* mask undisturbed, ignore tail policy */
+  PRED_mu = 1 << 6,
+  /* mask and tail both agnostic */
+  PRED_tama = 1 << 7,
+  /* mask undisturbed and tail agnostic */
+  PRED_tamu = 1 << 8,
+  /* mask agnostic and tail undisturbed */
+  PRED_tuma = 1 << 9,
+  /* mask and tail both undisturbed */
+  PRED_tumu = 1 << 10,
+  /* unsuffixed form; policy chosen by the pattern */
+  PRED_void = 1 << 11,
+  /* plain mask form; policy chosen by the pattern */
+  PRED_m = 1 << 12,
+  /* tail agnostic, ignore mask policy */
+  PRED_tam = 1 << 13,
+  /* tail undisturbed, ignore mask policy */
+  PRED_tum = 1 << 14,
+
+  NUM_PREDS
+};
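+/* These enumerators are single-bit values so that a function builder can
+   request several predications at once (e.g. PRED_ta | PRED_tu);
+   register_function iterates over each set bit.  */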
+
+/* Describes the various intrinsic types.  */
+enum intrinsic_pattern
+{
+  /* other intrinsic */
+  PAT_none = 1 << 0,
+  PAT_mask = 1 << 1,
+  PAT_tail = 1 << 2,
+  PAT_dest = 1 << 3,
+  PAT_void_dest = 1 << 4,
+  PAT_ignore_mask_policy = 1 << 5,
+  PAT_ignore_tail_policy = 1 << 6,
+  PAT_ignore_policy = 1 << 7,
+  PAT_merge = 1 << 8,
+};
+
+/* Describes the various operation types.  */
+enum operation_index
+{
+  OP_none = 1 << 0,
+  OP_vv = 1 << 1,
+  OP_vx = 1 << 2,
+  OP_v = 1 << 3,
+  OP_wv = 1 << 4,
+  OP_wx = 1 << 5,
+  OP_x_x_v = 1 << 6,
+  OP_vf2 = 1 << 7,
+  OP_vf4 = 1 << 8,
+  OP_vf8 = 1 << 9,
+  OP_vvm = 1 << 10,
+  OP_vxm = 1 << 11,
+  OP_x_x_w = 1 << 12,
+  OP_v_v = 1 << 13,
+  OP_v_x = 1 << 14,
+  OP_vs = 1 << 15,
+  OP_mm = 1 << 16,
+  OP_m = 1 << 17,
+  OP_vf = 1 << 18,
+  OP_vm = 1 << 19,
+  OP_wf = 1 << 20,
+  OP_vfm = 1 << 21,
+  OP_v_f = 1 << 22,
+  NUM_OP
+};
+
+/* Describe indexed-ordered or indexed-unordered load store.  */
+enum indexed_mode
+{
+  INDEXED_u,
+  INDEXED_o,
+};
+
+/* Describes the vector tail/mask policies.  */
+enum vector_policy
+{
+  undisturbed = 0,
+  agnostic = 1,
+  any = 2,
+};
+
+/* Enumerates the RVV (data) vector types, together called
+   "vector types" for brevity.  */
+enum vector_type_index
+{
+#define DEF_RVV_TYPE(ELEM_TYPE, NODE) VECTOR_TYPE_##ELEM_TYPE,
+#include "riscv-vector-builtins.def"
+  NUM_VECTOR_TYPES
+#undef DEF_RVV_TYPE
+};
+
+struct vector_vlmul_info
+{
+  enum vlmul_field_enum vlmul;
+  const char *suffix;
+  const char *boolnum;
+};
+
+struct vector_type_info
+{
+  enum vector_type_index type;
+  const char *elem_name;
+};
+
+// Function argument mode information, including the return type.
+struct vector_mode_attr
+{
+  machine_mode mode;
+  machine_mode attr;
+  // the required extension, e.g. TARGET_VECTOR
+  uint64_t extension;
+};
+
+// The complete set of argument mode information for a function,
+// including the return type.
+struct vector_mode_attr_list
+{
+  unsigned int attr_len;
+  vector_mode_attr *attr_list;
+};
+
+// For VATTR (OP, NAME, SIGN).
+struct vector_arg_attr_info
+{
+  int target_op;
+  enum data_type_index dt;
+  vector_mode_attr_list *mode_attr_list;
+};
+
+struct vector_arg_all_modes
+{
+  unsigned int arg_len;
+  data_type_index *dt_list;
+  int *target_op_list;
+  // arg_list[0] is always the return type
+  vector_mode_attr_list **arg_list;
+};
+
+struct vector_arg_modes
+{
+  uint64_t arg_extensions;
+  unsigned int arg_len;
+  // arg_list[0] is always the return type
+  machine_mode *arg_list;
+};
+
+constexpr unsigned int
+get_vma_vta (vector_policy vma, vector_policy vta)
+{
+  return (vma << 2) | vta;
+}
+
+constexpr vector_policy
+get_vma (unsigned int vma_vta)
+{
+  return (vector_policy)((vma_vta >> 2) & 0b11);
+}
+
+constexpr vector_policy
+get_vta (unsigned int vma_vta)
+{
+  return (vector_policy)(vma_vta & 0b11);
+}
+
+const unsigned int tama_policy = get_vma_vta(vector_policy::agnostic, vector_policy::agnostic);
+const unsigned int tamu_policy = get_vma_vta(vector_policy::undisturbed, vector_policy::agnostic);
+const unsigned int tuma_policy = get_vma_vta(vector_policy::agnostic, vector_policy::undisturbed);
+const unsigned int tumu_policy = get_vma_vta(vector_policy::undisturbed, vector_policy::undisturbed);
+const unsigned int ta_policy = get_vma_vta(vector_policy::any, vector_policy::agnostic);
+const unsigned int tu_policy = get_vma_vta(vector_policy::any, vector_policy::undisturbed);
+const unsigned int ma_policy = get_vma_vta(vector_policy::agnostic, vector_policy::any);
+const unsigned int mu_policy = get_vma_vta(vector_policy::undisturbed, vector_policy::any);
+const unsigned int any_policy = get_vma_vta(vector_policy::any, vector_policy::any);
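+
+/* A minimal illustration of the encoding above (these asserts are an
+   editorial sketch rather than part of the original interface): vma
+   occupies bits [3:2] and vta bits [1:0], so the accessors round-trip.  */
+static_assert (get_vma (tuma_policy) == vector_policy::agnostic,
+               "tuma encodes a mask-agnostic policy");
+static_assert (get_vta (tuma_policy) == vector_policy::undisturbed,
+               "tuma encodes a tail-undisturbed policy");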
+
+inline rtx
+gen_tama_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, tama_policy);
+}
+
+inline rtx
+gen_tamu_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, tamu_policy);
+}
+
+inline rtx
+gen_tuma_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, tuma_policy);
+}
+
+inline rtx
+gen_tumu_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, tumu_policy);
+}
+
+inline rtx
+gen_ta_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, ta_policy);
+}
+
+inline rtx
+gen_tu_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, tu_policy);
+}
+
+inline rtx
+gen_ma_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, ma_policy);
+}
+
+inline rtx
+gen_mu_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, mu_policy);
+}
+
+inline rtx
+gen_any_policy ()
+{
+  return gen_rtx_CONST_INT (Pmode, any_policy);
+}
+
+class function_builder;
+
+class GTY ((user)) function_instance
+{
+public:
+  function_instance (function_builder *, const char *, vector_arg_modes &,
+                     enum predication_index, enum operation_index);
+  function_instance (const char *name);
+  ~function_instance ();
+
+  bool operator== (const function_instance &) const;
+  bool operator!= (const function_instance &) const;
+
+  hashval_t hash () const;
+  bool check (location_t, tree, tree, unsigned int, tree *) const;
+  unsigned int call_properties () const;
+  bool reads_global_state_p () const;
+  bool modifies_global_state_p () const;
+  bool could_trap_p () const;
+
+  const char *get_base_name () const;
+  vector_arg_modes get_arg_pattern () const;
+  enum predication_index get_pred () const;
+  unsigned int get_vma_vta () const;
+  enum operation_index get_operation () const;
+  enum data_type_index *get_data_type_list () const;
+  function_builder *builder () const;
+  
+  char function_name[NAME_MAXLEN];
+
+private:
+  function_builder *m_builder;
+  const char *m_base_name;
+  vector_arg_modes m_target_arg_pattern;
+  enum predication_index m_target_pred;
+  enum operation_index m_target_operation;
+};
+
+/* Describes a function decl.  */
+class GTY (()) registered_function
+{
+public:
+  function_instance GTY ((skip)) instance;
+
+  /* The decl itself.  */
+  tree GTY ((skip)) decl;
+
+  /* The architecture extensions that the function requires, as a set of
+     RISCV_TARGET_* flags.  */
+  uint64_t required_extensions;
+  
+  /* True if the decl represents an overloaded function that needs to be
+     resolved.  */
+  bool overloaded_p;
+};
+
+/* A class for building and registering function decls.  */
+class function_builder
+{
+public:
+  function_builder (const char *, vector_arg_all_modes &, uint64_t,
+                    uint64_t, uint64_t, const unsigned int);
+
+  virtual ~function_builder ();
+
+  void register_function ();
+
+  /* Try to fold the given gimple call.  Return the new gimple statement
+     on success, otherwise return null.  */
+  virtual gimple * fold (const function_instance &, gimple_stmt_iterator *, gcall *) const;
+
+  /* Expand the given call into rtl.  Return the result of the function,
+     or an arbitrary value if the function doesn't return a result.  */
+  virtual rtx expand (const function_instance &, tree, rtx) const = 0;
+
+  rtx
+  expand_builtin_insn (enum insn_code, tree, rtx,
+                       const function_instance &) const;
+
+  virtual tree get_return_type (const function_instance &) const;
+
+  virtual tree get_mask_type (tree, const function_instance &, const vec<tree> &) const;
+
+  virtual void get_argument_types (const function_instance &,
+                                   vec<tree> &) const;
+
+  virtual size_t get_dest_arguments_length () const;
+  
+  uint64_t get_pattern () const;
+  /* Check whether the corresponding RTL pattern needs a mask operand.  */
+  bool need_mask_operand_p () const;
+  /* Check whether the corresponding RTL pattern needs a dest operand.  */
+  bool need_dest_operand_p () const;
+  /* Check whether the function decl has a mask argument.  */
+  bool has_mask_arg_p (enum predication_index) const;
+  /* Check whether the function decl has a dest argument.  */
+  virtual bool has_dest_arg_p (enum predication_index) const;
+  unsigned int get_policy (enum predication_index) const;
+  /* Get the parameter position of the mask argument.  */
+  virtual size_t get_position_of_mask_arg (enum predication_index) const;
+  /* Get the parameter position of the dest argument.  */
+  virtual size_t get_position_of_dest_arg (enum predication_index) const;
+
+  void apply_predication (const function_instance &, tree, vec<tree> &) const;
+
+  virtual unsigned int call_properties () const;
+
+  vector_arg_modes &get_arg_modes_by_iter_idx (unsigned int) const;
+
+  enum data_type_index *get_data_type_list () const;
+
+  bool check_required_extensions (location_t, tree, uint64_t) const;
+  virtual char * assemble_name (function_instance &);
+  void append_name (const char *);
+  char *finish_name ();
+  
+  /* Return true if the function can be overloaded. */ 
+  virtual bool can_be_overloaded_p (const function_instance &) const;
+  
+private:
+  void add_unique_function (function_instance &, tree, vec<tree> &);
+  void build_one (function_instance &);
+  tree get_attributes (const function_instance &) const;
+
+  registered_function &add_function (const function_instance &, const char *,
+                                     tree, tree, bool, bool) const;
+
+  /* The base name, as a string.  */
+  const char *m_base_name;
+  vector_arg_all_modes m_target_arg_patterns;
+  uint64_t m_target_pattern;
+  uint64_t m_target_preds;
+  uint64_t m_target_op_types;
+  uint64_t m_required_extensions;
+
+  unsigned int m_iter_idx_cnt;
+  unsigned int m_iter_arg_cnt;
+  unsigned int *m_iter_arg_idx_list;
+
+  /* True if we should create a separate decl for each instance of an
+     overloaded function, instead of using function_builder.  */
+  bool m_direct_overloads;
+  
+  /* Used for building up function names.  */
+  obstack m_string_obstack;
+};
+
+/* Hash traits for registered_function.  */
+struct registered_function_hasher : nofree_ptr_hash<registered_function>
+{
+  typedef function_instance compare_type;
+
+  static hashval_t hash (value_type);
+  static bool equal (value_type, const compare_type &);
+};
+
+} // namespace riscv_vector
+
+#endif // GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-iterators.def b/gcc/config/riscv/riscv-vector-builtins-iterators.def
new file mode 100644
index 00000000000..ac0c37e12d4
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins-iterators.def
@@ -0,0 +1,12 @@
+/* Do not modify this file; it is auto-generated by the md-parser script. */
+#ifndef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+#define DEF_RISCV_ARG_MODE_ATTR_VARIABLE(A, B)
+#endif
+
+#ifndef DEF_RISCV_ARG_MODE_ATTR
+#define DEF_RISCV_ARG_MODE_ATTR(A, B, C, D, E)
+#endif
+
+
+#undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+#undef DEF_RISCV_ARG_MODE_ATTR
diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
new file mode 100644
index 00000000000..7ea07a24b5b
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -0,0 +1,266 @@
+/* Builtins implementation for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#define IN_TARGET_CODE 1
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "cgraph.h"
+#include "diagnostic.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "varasm.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "stor-layout.h"
+#include "regs.h"
+#include "alias.h"
+#include "gimple-fold.h"
+#include "langhooks.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "tree-pass.h"
+#include "tree-vrp.h"
+#include "tree-ssanames.h"
+#include "tree-ssa-operands.h"
+#include "tree-phinodes.h"
+#include "targhooks.h"
+#include "langhooks-def.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-functions.h"
+#include "riscv-vector.h"
+namespace riscv_vector
+{
+
+/* Vector types with the same vlmul do not necessarily use the same mask
+   type, since the mask type depends on both SEW and LMUL.
+   For example: i32m8 uses vbool4_t while i8m8 uses vbool1_t. */
+static CONSTEXPR const vector_vlmul_info vector_vlmuls[] =
+{
+  { VLMUL_FIELD_101, "mf8", "64" }, { VLMUL_FIELD_110, "mf4", "32" },
+  { VLMUL_FIELD_111, "mf2", "16" }, { VLMUL_FIELD_000, "m1", "8" },
+  { VLMUL_FIELD_001, "m2", "4" }, { VLMUL_FIELD_010, "m4", "2" },
+  { VLMUL_FIELD_011, "m8", "1" },
+};
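+
+/* Editorial illustration (not in the original patch): the mask type for a
+   data type is vboolN_t with N = SEW/LMUL, so i32m8 gives 32/8 = 4, i.e.
+   vbool4_t, and i8m8 gives 8/8 = 1, i.e. vbool1_t.  The "boolnum" column
+   above lists that ratio for SEW=8 at each LMUL.  */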
+
+static CONSTEXPR const vector_type_info vector_type_infos[] =
+{
+#define DEF_RVV_TYPE(ELEM_TYPE, NODE) { VECTOR_TYPE_##ELEM_TYPE, #ELEM_TYPE },
+#include "riscv-vector-builtins.def"
+#undef DEF_RVV_TYPE
+};
+
+static GTY(()) tree internal_vector_types[NUM_VECTOR_TYPES + 1][MAX_VLMUL_FIELD];
+
+/* Same, but with the riscv_vector.h "v..._t" name.  */
+GTY(()) tree vector_types[MAX_TUPLE_NUM][NUM_VECTOR_TYPES + 1][MAX_VLMUL_FIELD];
+/* Same, but with the riscv_vector.h "v..._t *" name.  */
+GTY(()) tree vector_pointer_types[NUM_VECTOR_TYPES + 1][MAX_VLMUL_FIELD];
+/* The scalar type associated with each vector type.  */
+GTY(()) tree scalar_types[NUM_VECTOR_TYPES];
+/* The scalar pointer type associated with each vector type.  */
+GTY(()) tree scalar_pointer_types[NUM_VECTOR_TYPES];
+/* The const scalar pointer type associated with each vector type.  */
+GTY(()) tree const_scalar_pointer_types[NUM_VECTOR_TYPES];
+
+/* All registered function decls, hashed on the function_instance
+   that they implement.  This is used for looking up implementations of
+   overloaded functions.  */
+hash_table<registered_function_hasher> *function_table;
+
+static riscv_vector::function_builder** all_vector_functions;
+
+/* The list of all registered function decls, indexed by code.  */
+vec<registered_function *, va_gc> *registered_functions;
+
+static unsigned int NUM_INSN_FUNC;
+
+static void init_def_variables ();
+
+/* Initialize all compiler built-ins related to RVV that should be
+   defined at start-up.  */
+void
+init_builtins ()
+{
+  if (!TARGET_VECTOR)
+    return;
+}
+
+/* Return the function decl with RVV function subcode CODE, or error_mark_node
+   if no such function exists.  */
+tree
+builtin_decl (unsigned int code, bool)
+{
+  if (code >= vec_safe_length (registered_functions))
+    return error_mark_node;
+
+  return (*registered_functions)[code]->decl;
+}
+
+/* Perform any semantic checks needed for a call to the RVV function
+   with subcode CODE, such as testing for integer constant expressions.
+   The call occurs at location LOCATION and has NARGS arguments,
+   given by ARGS.  FNDECL is the original function decl, before
+   overload resolution.
+
+   Return true if the call is valid, otherwise report a suitable error.  */
+bool
+check_builtin_call (location_t location, vec<location_t>, unsigned int code,
+                    tree fndecl, unsigned int nargs, tree *args)
+{
+  const registered_function &rfn = *(*registered_functions)[code];
+  function_builder *builder = rfn.instance.builder ();
+
+  if (!builder->check_required_extensions (
+          location, rfn.decl, rfn.instance.get_arg_pattern ().arg_extensions))
+    return false;
+
+  return rfn.instance.check (location, fndecl, TREE_TYPE (rfn.decl), nargs,
+                             args);
+}
+
+/* Attempt to fold STMT, given that it's a call to the RVV function
+   with subcode CODE.  Return the new statement on success and null
+   on failure.  Insert any other new statements at GSI.  */
+gimple *
+gimple_fold_builtin (unsigned int code, gimple_stmt_iterator *gsi, gcall *stmt)
+{
+  registered_function &rfn = *(*registered_functions)[code];
+  function_builder *builder = rfn.instance.builder ();
+  return builder->fold (rfn.instance, gsi, stmt);
+}
+
+/* Expand a call to the RVV function with subcode CODE.  EXP is the call
+   expression and TARGET is the preferred location for the result.
+   Return the value of the lhs.  */
+rtx
+expand_builtin (unsigned int code, tree exp, rtx target)
+{
+  registered_function &rfn = *(*registered_functions)[code];
+  function_builder *builder = rfn.instance.builder ();
+
+  if (!builder->check_required_extensions (
+          EXPR_LOCATION (exp), rfn.decl,
+          rfn.instance.get_arg_pattern ().arg_extensions))
+    return target;
+
+  return builder->expand (rfn.instance, exp, target);
+}
+
+riscv_vector::vector_arg_all_modes &
+get_vector_arg_all_patterns (unsigned int len,
+                             riscv_vector::vector_arg_attr_info attr, ...)
+{
+  riscv_vector::vector_arg_all_modes &patterns =
+      *ggc_alloc<riscv_vector::vector_arg_all_modes> ();
+  patterns.arg_len = len;
+  patterns.arg_list = (riscv_vector::vector_mode_attr_list **)xmalloc (
+      len * sizeof (riscv_vector::vector_mode_attr_list *));
+  patterns.target_op_list = (int *)xmalloc (len * sizeof (int));
+  patterns.dt_list =
+      (enum data_type_index *)xmalloc (len * sizeof (enum data_type_index));
+
+  unsigned int arg_idx = 0;
+  va_list arg_ptr;
+  riscv_vector::vector_arg_attr_info next_attr = attr;
+
+  va_start (arg_ptr, attr);
+
+  while (arg_idx < len)
+    {
+      patterns.dt_list[arg_idx] = next_attr.dt;
+      patterns.arg_list[arg_idx] = next_attr.mode_attr_list;
+      patterns.target_op_list[arg_idx] = next_attr.target_op;
+      arg_idx++;
+      /* The first attribute comes from the named parameter, so only fetch
+         from the va_list while more arguments remain.  */
+      if (arg_idx < len)
+        next_attr = va_arg (arg_ptr, riscv_vector::vector_arg_attr_info);
+    }
+
+  va_end (arg_ptr);
+
+  return patterns;
+}
+
+static vector_mode_attr_list vector_mode_attr_list_list[vector_arg_mode_category_num];
+
+static void
+init_def_variables ()
+{
+
+/* define vector arg mode category */
+#define VVAR(NAME) vector_mode_attr_list_list[vector_mode_attr_##NAME]
+#define VITER(NAME, SIGN) riscv_vector::vector_arg_attr_info{-1, DT_##SIGN, &VVAR(NAME)}
+#define VATTR(OP, NAME, SIGN) riscv_vector::vector_arg_attr_info{OP, DT_##SIGN, &VVAR(NAME)}
+#define DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VARIABLE_NAME, ELEM_CNT) \
+  VVAR(VARIABLE_NAME) = {ELEM_CNT, \
+    (riscv_vector::vector_mode_attr*)xmalloc(ELEM_CNT * sizeof(vector_mode_attr))};
+#include "riscv-vector-builtins-iterators.def"
+#undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+
+/* define every vector arg mode in category  */
+#define DEF_RISCV_ARG_MODE_ATTR(VARIABLE_NAME, INDEX, MODE, ATTR_MODE,        \
+				CONDITION)                                    \
+  VVAR (VARIABLE_NAME).attr_list[INDEX]                                       \
+      = { MODE##mode, ATTR_MODE##mode, RISCV_##CONDITION };
+#include "riscv-vector-builtins-iterators.def"
+#undef DEF_RISCV_ARG_MODE_ATTR
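+
+/* Editorial illustration with a hypothetical entry (the real entries are
+   generated by the md-parser script):
+     DEF_RISCV_ARG_MODE_ATTR_VARIABLE (V, 1)
+     DEF_RISCV_ARG_MODE_ATTR (V, 0, VNx4SI, VNx4SI, TARGET_VECTOR)
+   would allocate a one-entry table for V above, then fill slot 0 here
+   with { VNx4SImode, VNx4SImode, RISCV_TARGET_VECTOR }.  */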
+
+  /* count the number of intrinsic functions */
+  NUM_INSN_FUNC = 0;
+#define DEF_RVV_FUNCTION(BASE_NAME, CLASS_NAME, ARG_PATTERN, INTRINSIC_PATTERN, PREDS, OP_TYPES) \
+  NUM_INSN_FUNC++;
+#include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+
+  all_vector_functions = (riscv_vector::function_builder **)xmalloc (
+			   sizeof (riscv_vector::function_builder *) * NUM_INSN_FUNC);
+
+  unsigned int func_idx = 0;
+#undef VITER
+#undef VATTR
+#define VITER(NAME, SIGN)                                                     \
+  riscv_vector::vector_arg_attr_info { -1, DT_##SIGN, &VVAR (NAME) }
+#define VATTR(OP, NAME, SIGN)                                                 \
+  riscv_vector::vector_arg_attr_info { OP, DT_##SIGN, &VVAR (NAME) }
+#define DEF_RVV_FUNCTION(BASE_NAME, CLASS_NAME, ARG_PATTERN, INTRINSIC_PATTERN, PREDS, OP_TYPES) \
+  all_vector_functions[func_idx++] = new riscv_vector::CLASS_NAME (           \
+      #BASE_NAME, get_vector_arg_all_patterns ARG_PATTERN, INTRINSIC_PATTERN, PREDS, OP_TYPES,   \
+      (REQUIRED_EXTENSIONS));
+#include "riscv-vector-builtins-functions.def"
+#undef DEF_RVV_FUNCTION
+}
+
+} //end namespace riscv_vector
+
+using namespace riscv_vector;
+
+#include "gt-riscv-vector-builtins.h"
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins.def b/gcc/config/riscv/riscv-vector-builtins.def
new file mode 100644
index 00000000000..805f9d725dc
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins.def
@@ -0,0 +1,37 @@
+/* Builtins macros for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef DEF_RVV_TYPE
+#define DEF_RVV_TYPE(A, B)
+#endif
+
+DEF_RVV_TYPE (bool, boolean)
+DEF_RVV_TYPE (float32, float)
+DEF_RVV_TYPE (float64, double)
+DEF_RVV_TYPE (int8, int8)
+DEF_RVV_TYPE (int16, int16)
+DEF_RVV_TYPE (int32, int32)
+DEF_RVV_TYPE (int64, int64)
+DEF_RVV_TYPE (uint8, unsigned_int8)
+DEF_RVV_TYPE (uint16, unsigned_int16)
+DEF_RVV_TYPE (uint32, unsigned_int32)
+DEF_RVV_TYPE (uint64, unsigned_int64)
+
+#undef DEF_RVV_TYPE
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
new file mode 100644
index 00000000000..6bba4c90c3a
--- /dev/null
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -0,0 +1,59 @@
+/* Builtins definitions for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RISCV_VECTOR_BUILTINS_H
+#define GCC_RISCV_VECTOR_BUILTINS_H
+
+#include <vector>
+#include "riscv-vector-builtins-functions.h"
+
+namespace riscv_vector
+{
+  
+/* Globally shared variables.  */
+
+static const unsigned int RISCV_TARGET_ANY = 0;
+static const unsigned int RISCV_TARGET_VECTOR = 1;
+static const unsigned int RISCV_TARGET_HARD_FLOAT = 1 << 2;
+static const unsigned int RISCV_TARGET_DOUBLE_FLOAT = 1 << 3;
+
+enum vector_arg_mode_category {
+#define VVAR(NAME) vector_mode_attr_##NAME
+#define DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VARIABLE_NAME, ELEM_CNT) \
+  VVAR(VARIABLE_NAME),
+#include "riscv-vector-builtins-iterators.def"
+#undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
+#undef VVAR
+  /* the number of arg mode categories */
+  vector_arg_mode_category_num
+};
+
+void init_builtins ();
+void handle_pragma_vector ();
+tree builtin_decl (unsigned, bool);
+gimple *gimple_fold_builtin (unsigned int, gimple_stmt_iterator *, gcall *);
+rtx expand_builtin (unsigned int, tree, rtx);
+bool check_builtin_call (location_t, vec<location_t>, unsigned int,
+			 tree, unsigned int, tree *);
+machine_mode vector_builtin_mode (scalar_mode, enum vlmul_field_enum);
+
+} // end namespace riscv_vector
+
+#endif
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index e315b5d2cac..87dc6739f4f 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -226,4 +226,21 @@ rvv_regsize (machine_mode mode)
     }
 	
 	return 1;
+}
+
+/* Get related mask mode for a RVV vector mode.  */
+
+opt_machine_mode
+rvv_get_mask_mode (machine_mode mode)
+{
+  machine_mode mask_mode;
+  int nf = 1;
+
+  FOR_EACH_MODE_IN_CLASS (mask_mode, MODE_VECTOR_BOOL)
+    if (GET_MODE_INNER (mask_mode) == BImode
+        && known_eq (GET_MODE_NUNITS (mask_mode) * nf,
+                     GET_MODE_NUNITS (mode))
+        && rvv_mask_mode_p (mask_mode))
+      return mask_mode;
+  return default_get_mask_mode (mode);
 }
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
index b8d77ddb195..62df507e7df 100644
--- a/gcc/config/riscv/riscv-vector.h
+++ b/gcc/config/riscv/riscv-vector.h
@@ -25,4 +25,5 @@ bool rvv_legitimate_poly_int_p (rtx);
 unsigned int rvv_offset_temporaries (bool, poly_int64);
 vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
 int rvv_regsize (machine_mode);
+opt_machine_mode rvv_get_mask_mode (machine_mode);
 #endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 37d8f1271d4..b82a38da7c9 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -4967,7 +4967,16 @@ riscv_class_max_nregs (reg_class_t rclass, machine_mode mode)
 
   if (reg_class_subset_p (rclass, GR_REGS))
     return riscv_hard_regno_nregs (GP_REG_FIRST, mode);
+
+  if (reg_class_subset_p (V_REGS, rclass))
+    return riscv_hard_regno_nregs (V_REG_FIRST, mode);
+
+  if (reg_class_subset_p (VL_REGS, rclass))
+    return 1;
 
+  if (reg_class_subset_p (VTYPE_REGS, rclass))
+    return 1;
+
   return 0;
 }
 
@@ -5308,6 +5317,15 @@ riscv_conditional_register_usage (void)
       for (int regno = FP_REG_FIRST; regno <= FP_REG_LAST; regno++)
 	call_used_regs[regno] = 1;
     }
+
+  if (!TARGET_VECTOR)
+    {
+      for (int regno = V_REG_FIRST; regno <= V_REG_LAST; regno++)
+	call_used_regs[regno] = 1;
+
+      fixed_regs[VTYPE_REGNUM] = call_used_regs[VTYPE_REGNUM] = 1;
+      fixed_regs[VL_REGNUM] = call_used_regs[VL_REGNUM] = 1;
+    }
 }
 
 /* Return a register priority for hard reg REGNO.  */
@@ -5903,6 +5921,9 @@ riscv_asan_shadow_offset (void)
 #undef TARGET_BUILTIN_DECL
 #define TARGET_BUILTIN_DECL riscv_builtin_decl
 
+#undef TARGET_GIMPLE_FOLD_BUILTIN
+#define TARGET_GIMPLE_FOLD_BUILTIN riscv_gimple_fold_builtin
+
 #undef TARGET_EXPAND_BUILTIN
 #define TARGET_EXPAND_BUILTIN riscv_expand_builtin
 
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index b5abf9c45d0..9b0da73f3b5 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -27,6 +27,42 @@ riscv-vector.o: $(srcdir)/config/riscv/riscv-vector.cc
 	$(COMPILE) $<
 	$(POSTCOMPILE)
 
+riscv-vector-builtins-functions.o: \
+  $(srcdir)/config/riscv/riscv-vector-builtins-functions.cc \
+  $(srcdir)/config/riscv/riscv-vector-builtins.def \
+  $(srcdir)/config/riscv/riscv-vector-builtins-functions.def \
+  $(srcdir)/config/riscv/vector-iterators.md \
+  $(srcdir)/config/riscv/md-parser  \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) $(DIAGNOSTIC_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  stor-layout.h $(REG_H) alias.h gimple-fold.h langhooks.h \
+  stringpool.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-functions.h
+	python3 $(srcdir)/config/riscv/md-parser \
+		riscv-vector-builtins-iterators.def vector-iterators.md && \
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/riscv-vector-builtins-functions.cc
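+
+# Editorial note (not part of the original patch): the python3 step above
+# regenerates riscv-vector-builtins-iterators.def from vector-iterators.md
+# before riscv-vector-builtins-functions.cc itself is compiled; the
+# riscv-vector-builtins.o rule below relies on the generated .def as well.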
+
+riscv-vector-builtins.o: \
+  $(srcdir)/config/riscv/riscv-vector-builtins.cc \
+  $(srcdir)/config/riscv/riscv-vector-builtins.def \
+  $(srcdir)/config/riscv/riscv-vector-builtins-functions.def \
+  $(srcdir)/config/riscv/vector-iterators.md \
+  $(srcdir)/config/riscv/md-parser  \
+  $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \
+  $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) $(DIAGNOSTIC_H) \
+  $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \
+  gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \
+  stor-layout.h $(REG_H) alias.h gimple-fold.h langhooks.h \
+  stringpool.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins.h \
+  $(srcdir)/config/riscv/riscv-vector-builtins-functions.h \
+	riscv-vector-builtins-functions.o
+	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
+		$(srcdir)/config/riscv/riscv-vector-builtins.cc
+
 PASSES_EXTRA += $(srcdir)/config/riscv/riscv-passes.def
 
 $(common_out_file): $(srcdir)/config/riscv/riscv-cores.def \
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
new file mode 100644
index 00000000000..450e20c44ce
--- /dev/null
+++ b/gcc/config/riscv/vector-iterators.md
@@ -0,0 +1,19 @@
+;; Machine description for RISCV architecture.
+;; Copyright (C) 2022-2022 Free Software Foundation, Inc.
+;; Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify it
+;; under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful, but
+;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+;; General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
\ No newline at end of file
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 03/21]   Add RVV datatypes
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
  2022-05-31  8:49 ` [PATCH 01/21] Add RVV modes and support scalable vector juzhe.zhong
  2022-05-31  8:49 ` [PATCH 02/21] Add RVV intrinsic framework juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 04/21] Add RVV intrinsic enable #pragma riscv intrinsic "vector" and introduce RVV header "riscv_vector.h" juzhe.zhong
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-vector-builtins.cc (make_type_sizeless): New function.
        (sizeless_type_p): New function.
        (vector_builtin_mode): New function.
        (vector_legal_vlmul): New function.
        (add_vector_type_attribute): New function.
        (register_general_builtin_types): New function.
        (DEFINE_SCALAR_PTR_TYPE_NODE): New macro.
        (register_builtin_types): New function.
        (register_vector_type): New function.
        (handle_pragma_vector): New function.
        (lookup_rvv_type_attribute): New function.
        (builtin_type_p): New function.
        (verify_type_context): New function.
        (mangle_builtin_type): New function.
        * config/riscv/riscv-vector-builtins.h (builtin_type_p): New function.
        (verify_type_context): New function.
        (mangle_builtin_type): New function.
        * config/riscv/riscv.cc (riscv_vector_mode_supported_p): New function.
        (riscv_vector_alignment): New function.
        (riscv_vectorize_preferred_vector_alignment): New function.
        (riscv_simd_vector_alignment_reachable): New function.
        (riscv_builtin_support_vector_misalignment): New function.
        (riscv_compatible_vector_types_p): New function.
        (riscv_verify_type_context): New function.
        (riscv_mangle_type): New function.
        (TARGET_VECTOR_MODE_SUPPORTED_P): New target hook.
        (TARGET_VECTOR_ALIGNMENT): New target hook.
        (TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT): New target hook.
        (TARGET_VECTORIZE_VECTOR_ALIGNMENT_REACHABLE): New target hook.
        (TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT): New target hook.
        (TARGET_COMPATIBLE_VECTOR_TYPES_P): New target hook.
        (TARGET_VERIFY_TYPE_CONTEXT): New target hook.
        (TARGET_MANGLE_TYPE): New target hook.
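
A short usage sketch (illustrative, an editorial addition rather than part
of the patch): once the ABI types are registered, a translation unit
compiled with the 'V' extension can use them directly, e.g.

__rvv_int8mf8_t
copy (__rvv_int8mf8_t v)
{
  return v;
}

and in C++ the type mangles as u15__rvv_int8mf8_t, following the
AArch64-style rule described below.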
        
---
 gcc/config/riscv/riscv-vector-builtins.cc | 466 ++++++++++++++++++++++
 gcc/config/riscv/riscv-vector-builtins.h  |   3 +
 gcc/config/riscv/riscv.cc                 | 144 +++++++
 3 files changed, 613 insertions(+)

diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc
index 7ea07a24b5b..ef734572add 100644
--- a/gcc/config/riscv/riscv-vector-builtins.cc
+++ b/gcc/config/riscv/riscv-vector-builtins.cc
@@ -109,6 +109,286 @@ static unsigned int NUM_INSN_FUNC;
 
 static void init_def_variables ();
 
+/* Force TYPE to be a sizeless type.  */
+static void
+make_type_sizeless (tree type)
+{
+  TYPE_ATTRIBUTES (type) = tree_cons (get_identifier ("RVV sizeless type"),
+				      NULL_TREE, TYPE_ATTRIBUTES (type));
+}
+
+/* Return true if TYPE is a sizeless type.  */
+static bool
+sizeless_type_p (const_tree type)
+{
+  if (type == error_mark_node)
+    return false;
+  return lookup_attribute ("RVV sizeless type", TYPE_ATTRIBUTES (type));
+}
+
+machine_mode
+vector_builtin_mode (scalar_mode inner_mode, enum vlmul_field_enum vlmul)
+{
+  switch (inner_mode)
+    {
+    case E_BImode:
+      return vlmul == VLMUL_FIELD_000   ? VNx16BImode
+             : vlmul == VLMUL_FIELD_001 ? VNx32BImode
+             : vlmul == VLMUL_FIELD_010 ? VNx64BImode
+             : vlmul == VLMUL_FIELD_011 ? VNx128BImode
+             : vlmul == VLMUL_FIELD_111 ? VNx8BImode
+             : vlmul == VLMUL_FIELD_110 ? VNx4BImode
+                                       : VNx2BImode;
+
+    case E_QImode:
+      return vlmul == VLMUL_FIELD_000   ? VNx16QImode
+             : vlmul == VLMUL_FIELD_001 ? VNx32QImode
+             : vlmul == VLMUL_FIELD_010 ? VNx64QImode
+             : vlmul == VLMUL_FIELD_011 ? VNx128QImode
+             : vlmul == VLMUL_FIELD_111 ? VNx8QImode
+             : vlmul == VLMUL_FIELD_110 ? VNx4QImode
+                                       : VNx2QImode;
+
+    case E_HImode:
+      if (vlmul == VLMUL_FIELD_101)
+        gcc_unreachable ();
+
+      return vlmul == VLMUL_FIELD_000   ? VNx8HImode
+             : vlmul == VLMUL_FIELD_001 ? VNx16HImode
+             : vlmul == VLMUL_FIELD_010 ? VNx32HImode
+             : vlmul == VLMUL_FIELD_011 ? VNx64HImode
+             : vlmul == VLMUL_FIELD_111 ? VNx4HImode
+                                       : VNx2HImode;
+
+    case E_SImode:
+      if (vlmul == VLMUL_FIELD_101 || vlmul == VLMUL_FIELD_110)
+        gcc_unreachable ();
+
+      return vlmul == VLMUL_FIELD_000   ? VNx4SImode
+             : vlmul == VLMUL_FIELD_001 ? VNx8SImode
+             : vlmul == VLMUL_FIELD_010 ? VNx16SImode
+             : vlmul == VLMUL_FIELD_011 ? VNx32SImode
+                                       : VNx2SImode;
+
+    case E_DImode:
+      if (vlmul == VLMUL_FIELD_101 || vlmul == VLMUL_FIELD_110 ||
+          vlmul == VLMUL_FIELD_111)
+        gcc_unreachable ();
+
+      return vlmul == VLMUL_FIELD_000   ? VNx2DImode
+             : vlmul == VLMUL_FIELD_001 ? VNx4DImode
+             : vlmul == VLMUL_FIELD_010 ? VNx8DImode
+                                       : VNx16DImode;
+
+    case E_SFmode:
+      if (vlmul == VLMUL_FIELD_101 || vlmul == VLMUL_FIELD_110)
+        gcc_unreachable ();
+
+      return vlmul == VLMUL_FIELD_000   ? VNx4SFmode
+             : vlmul == VLMUL_FIELD_001 ? VNx8SFmode
+             : vlmul == VLMUL_FIELD_010 ? VNx16SFmode
+             : vlmul == VLMUL_FIELD_011 ? VNx32SFmode
+                                       : VNx2SFmode;
+
+    case E_DFmode:
+      if (vlmul == VLMUL_FIELD_101 || vlmul == VLMUL_FIELD_110 ||
+          vlmul == VLMUL_FIELD_111)
+        gcc_unreachable ();
+
+      return vlmul == VLMUL_FIELD_000   ? VNx2DFmode
+             : vlmul == VLMUL_FIELD_001 ? VNx4DFmode
+             : vlmul == VLMUL_FIELD_010 ? VNx8DFmode
+                                       : VNx16DFmode;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  gcc_unreachable ();
+}
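+
+/* For example (editorial note): SEW=32 with LMUL=1 (VLMUL_FIELD_000) maps
+   to VNx4SImode, a scalable mode with a minimum of four 32-bit elements;
+   each step up or down in LMUL doubles or halves the element count.  */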
+
+static bool
+vector_legal_vlmul (scalar_mode inner_mode, enum vlmul_field_enum vlmul)
+{
+  if (vlmul == VLMUL_FIELD_100)
+    return false;
+
+  switch (inner_mode)
+    {
+    case E_HImode:
+      return vlmul != VLMUL_FIELD_101;
+
+    case E_SImode:
+    case E_SFmode:
+      return vlmul != VLMUL_FIELD_101 && vlmul != VLMUL_FIELD_110;
+
+    case E_DImode:
+    case E_DFmode:
+      return vlmul <= VLMUL_FIELD_011;
+
+    default:
+      break;
+    }
+
+  return true;
+}
+
+/* Record that TYPE is an ABI-defined RVV vector type that carries NF, SEW,
+   VLMUL and mask (IS_BOOL) information.  MANGLED_NAME, if nonnull, is the
+   ABI-defined mangling of the type.  */
+static void
+add_vector_type_attribute (tree type, unsigned int nf, unsigned int sew, unsigned int vlmul,
+			   unsigned int is_bool, const char *mangled_name)
+{
+  tree mangled_name_tree
+    = (mangled_name ? get_identifier (mangled_name) : NULL_TREE);
+
+  tree value = tree_cons (NULL_TREE, mangled_name_tree, NULL_TREE);
+  value = tree_cons (NULL_TREE, size_int (nf), value);
+  value = tree_cons (NULL_TREE, size_int (sew), value);
+  value = tree_cons (NULL_TREE, size_int (vlmul), value);
+  value = tree_cons (NULL_TREE, size_int (is_bool), value);
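+
+  /* Editorial note: each tree_cons above prepends, so the final value list
+     order is { is_bool, vlmul, sew, nf, mangled_name }; mangle_builtin_type
+     below relies on the mangled name sitting at chain index 4.  */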
+  TYPE_ATTRIBUTES (type) = tree_cons (get_identifier ("RVV type"), value,
+				      TYPE_ATTRIBUTES (type));
+}
+
+/* This code is adapted from the AArch64 port.  */
+static void
+register_general_builtin_types (void)
+{
+  scalar_types[VECTOR_TYPE_bool] = boolean_type_node;
+  scalar_types[VECTOR_TYPE_int8] = intQI_type_node;
+  scalar_types[VECTOR_TYPE_uint8] = unsigned_intQI_type_node;
+  scalar_types[VECTOR_TYPE_int16] = intHI_type_node;
+  scalar_types[VECTOR_TYPE_uint16] = unsigned_intHI_type_node;
+
+  if (TARGET_64BIT)
+    {
+      scalar_types[VECTOR_TYPE_int32] = intSI_type_node;
+      scalar_types[VECTOR_TYPE_uint32] = unsigned_intSI_type_node;
+    }
+  else
+    {
+      /* int32_t/uint32_t are defined as `long`/`unsigned long` on RV32,
+          but intSI_type_node/unsigned_intSI_type_node are
+          `int` and `unsigned int`, so use long_integer_type_node and
+          long_unsigned_type_node here for type consistency.  */
+      scalar_types[VECTOR_TYPE_int32] = long_integer_type_node;
+      scalar_types[VECTOR_TYPE_uint32] = long_unsigned_type_node;
+    }
+
+  scalar_types[VECTOR_TYPE_int64] = intDI_type_node;
+  scalar_types[VECTOR_TYPE_uint64] = unsigned_intDI_type_node;
+  scalar_types[VECTOR_TYPE_float32] = float_type_node;
+  scalar_types[VECTOR_TYPE_float64] = double_type_node;
+
+  /* Pointer types.  */
+#define DEFINE_SCALAR_PTR_TYPE_NODE(NBITS)                                     \
+  scalar_pointer_types[VECTOR_TYPE_int##NBITS] =                               \
+      build_pointer_type (scalar_types[VECTOR_TYPE_int##NBITS]);               \
+  scalar_pointer_types[VECTOR_TYPE_uint##NBITS] =                              \
+      build_pointer_type (scalar_types[VECTOR_TYPE_uint##NBITS]);              \
+  const_scalar_pointer_types[VECTOR_TYPE_int##NBITS] = build_pointer_type (    \
+      build_type_variant (scalar_types[VECTOR_TYPE_int##NBITS], 1, 0));        \
+  const_scalar_pointer_types[VECTOR_TYPE_uint##NBITS] = build_pointer_type (   \
+      build_type_variant (scalar_types[VECTOR_TYPE_uint##NBITS], 1, 0));
+
+  DEFINE_SCALAR_PTR_TYPE_NODE (8)
+  DEFINE_SCALAR_PTR_TYPE_NODE (16)
+  DEFINE_SCALAR_PTR_TYPE_NODE (32)
+  DEFINE_SCALAR_PTR_TYPE_NODE (64)
+
+  scalar_pointer_types[VECTOR_TYPE_float32] = float_ptr_type_node;
+  scalar_pointer_types[VECTOR_TYPE_float64] = double_ptr_type_node;
+  const_scalar_pointer_types[VECTOR_TYPE_float32] = build_pointer_type (
+      build_type_variant (scalar_types[VECTOR_TYPE_float32], 1, 0));
+  const_scalar_pointer_types[VECTOR_TYPE_float64] = build_pointer_type (
+      build_type_variant (scalar_types[VECTOR_TYPE_float64], 1, 0));
+}
+
+/* Register the built-in VECTOR ABI types, such as __rvv_int8mf8_t.  */
+static void
+register_builtin_types ()
+{
+  for (unsigned int i = 0; i < NUM_VECTOR_TYPES; ++i)
+    {
+      tree eltype = scalar_types[i];
+      scalar_mode elmode =
+          (eltype == boolean_type_node) ? BImode : SCALAR_TYPE_MODE (eltype);
+
+      for (unsigned int j = 0; j < ARRAY_SIZE (vector_vlmuls); ++j)
+        {
+          if (!vector_legal_vlmul (elmode, vector_vlmuls[j].vlmul))
+            continue;
+
+          char abi_name[NAME_MAXLEN] = {0};
+          char mangled_name[NAME_MAXLEN] = {0};
+          bool is_bool;
+          tree vectype;
+          unsigned int sew = GET_MODE_BITSIZE (elmode);
+          machine_mode mode =
+              vector_builtin_mode (elmode, vector_vlmuls[j].vlmul);
+
+          /* Build the vector type (a mask type when the element type is
+             bool) for this mode.  */
+          vectype = build_vector_type_for_mode (eltype, mode);
+
+          /* NOTE: See the 'omp_clause_aligned_alignment' function in
+             omp-low.cc.  We don't know exactly why this protection is
+             needed, but it seems to make the GCC build more reliable. */
+          if (TYPE_MODE (vectype) != mode)
+            continue;
+                
+          if (eltype == boolean_type_node)
+            {
+              gcc_assert (VECTOR_MODE_P (TYPE_MODE (vectype)) &&
+                          TYPE_MODE (vectype) == mode &&
+                          TYPE_MODE (vectype) == TYPE_MODE_RAW (vectype) &&
+                          TYPE_ALIGN (vectype) == 8 &&
+                          known_eq (tree_to_poly_uint64 (TYPE_SIZE (vectype)),
+                                    BITS_PER_RISCV_VECTOR));
+              is_bool = true;
+            }
+          else
+            {
+              gcc_assert (VECTOR_MODE_P (TYPE_MODE (vectype)) &&
+                          TYPE_MODE (vectype) == mode &&
+                          TYPE_MODE_RAW (vectype) == mode &&
+                          TYPE_ALIGN (vectype) <= 128 &&
+                          known_eq (tree_to_poly_uint64 (TYPE_SIZE (vectype)),
+                                    GET_MODE_BITSIZE (mode)));
+              is_bool = false;
+            }
+          /* This naming scheme is adapted from the AArch64 port. */
+          /* abi_name and api_name follow the vector type implementation
+             in LLVM.  Take sew = 8, vlmul = 1/8 for example:
+             abi_name = __rvv_int8mf8_t,
+             api_name = vint8mf8_t.
+             The mangled name follows the AArch64 rule, i.e.
+             "u" + length of abi_name + abi_name,
+             so that mangled_name = u15__rvv_int8mf8_t.  */
+          snprintf (abi_name, NAME_MAXLEN, "__rvv_%s%s_t",
+                    vector_type_infos[i].elem_name,
+                    is_bool ? vector_vlmuls[j].boolnum
+                            : vector_vlmuls[j].suffix);
+          snprintf (mangled_name, NAME_MAXLEN, "u%d__rvv_%s%s_t",
+                    (int)strlen (abi_name), vector_type_infos[i].elem_name,
+                    is_bool ? vector_vlmuls[j].boolnum
+                            : vector_vlmuls[j].suffix);
+          vectype = build_distinct_type_copy (vectype);
+          gcc_assert (vectype == TYPE_MAIN_VARIANT (vectype));
+          SET_TYPE_STRUCTURAL_EQUALITY (vectype);
+          TYPE_ARTIFICIAL (vectype) = 1;
+          TYPE_INDIVISIBLE_P (vectype) = 1;
+          add_vector_type_attribute (vectype, 1, sew, vector_vlmuls[j].vlmul, is_bool,
+                                     mangled_name);
+          make_type_sizeless (vectype);
+          internal_vector_types[i][j] = vectype;
+          lang_hooks.types.register_builtin_type (vectype, abi_name);
+        }
+    }
+}
+
 /* Initialize all compiler built-ins related to RVV that should be
    defined at start-up.  */
 void
@@ -116,6 +396,192 @@ init_builtins ()
 {
   if (!TARGET_VECTOR)
     return;
+  
+  register_general_builtin_types ();
+  register_builtin_types ();
+
+  if (in_lto_p)
+    handle_pragma_vector ();
+}
+
+/* This code is adapted from the AArch64 port.  */
+/* Register vector type TYPE under its riscv_vector.h name.  */
+static void
+register_vector_type (unsigned int type, unsigned int lmul)
+{
+  tree vectype = internal_vector_types[type][lmul];
+  char rvv_name[NAME_MAXLEN] = {0};
+  snprintf (rvv_name, NAME_MAXLEN, "v%s%s_t", vector_type_infos[type].elem_name,
+            strcmp (vector_type_infos[type].elem_name, "bool") == 0
+                ? vector_vlmuls[lmul].boolnum
+                : vector_vlmuls[lmul].suffix);
+  tree id = get_identifier (rvv_name);
+  tree decl = build_decl (input_location, TYPE_DECL, id, vectype);
+  decl = lang_hooks.decls.pushdecl (decl);
+
+  /* Record the new RVV type if pushdecl succeeded without error.  Use
+     the ABI type otherwise, so that the type we record at least has the
+     right form, even if it doesn't have the right name.  This should give
+     better error recovery behavior than installing error_mark_node or
+     installing an incorrect type.  */
+  if (decl && TREE_CODE (decl) == TYPE_DECL &&
+      TREE_TYPE (decl) != error_mark_node &&
+      TYPE_MAIN_VARIANT (TREE_TYPE (decl)) == vectype)
+    vectype = TREE_TYPE (decl);
+
+  vector_types[0][type][lmul] = vectype;
+  vector_pointer_types[type][lmul] = build_pointer_type (vectype);
+}
+
+/* Implement #pragma riscv intrinsic vector.  */
+void
+handle_pragma_vector ()
+{
+  if (function_table)
+    {
+      error ("duplicate definition of %qs", "vector");
+      return;
+    }
+
+  /* Define the vector and tuple types.  */
+  for (unsigned int i = 0; i < NUM_VECTOR_TYPES; ++i)
+    {
+      tree eltype = scalar_types[i];
+      scalar_mode elmode =
+          (eltype == boolean_type_node) ? BImode : SCALAR_TYPE_MODE (eltype);
+
+      for (unsigned int j = 0; j < ARRAY_SIZE (vector_vlmuls); ++j)
+        {
+          if (!vector_legal_vlmul (elmode, vector_vlmuls[j].vlmul))
+            continue;
+            
+          register_vector_type (i, j);
+        }
+    }
+
+  init_def_variables ();
+
+  /* Define the functions.  */
+  function_table = new hash_table<registered_function_hasher> (1023);
+
+  for (unsigned int i = 0; i < NUM_INSN_FUNC; ++i)
+    all_vector_functions[i]->register_function ();
+}
+
+/* If TYPE is an ABI-defined RVV type, return its attribute descriptor,
+   otherwise return null.  */
+static tree
+lookup_rvv_type_attribute (const_tree type)
+{
+  if (type == error_mark_node)
+    return NULL_TREE;
+  return lookup_attribute ("RVV type", TYPE_ATTRIBUTES (type));
+}
+
+/* Return true if TYPE is a built-in RVV type defined by the ABI or RVV.  */
+bool
+builtin_type_p (const_tree type)
+{
+  return lookup_rvv_type_attribute (type);
+}
+
+/* Implement TARGET_VERIFY_TYPE_CONTEXT for RVV types.  */
+bool
+verify_type_context (location_t loc, type_context_kind context,
+                     const_tree type, bool silent_p)
+{
+  if (!sizeless_type_p (type))
+    return true;
+
+  switch (context)
+    {
+    case TCTX_SIZEOF:
+    case TCTX_STATIC_STORAGE:
+      if (!silent_p)
+        error_at (loc, "RVV type %qT does not have a fixed size", type);
+
+      return false;
+
+    case TCTX_ALIGNOF:
+      if (!silent_p)
+        error_at (loc, "RVV type %qT does not have a defined alignment", type);
+
+      return false;
+
+    case TCTX_THREAD_STORAGE:
+      if (!silent_p)
+        error_at (loc,
+                  "variables of type %qT cannot have thread-local"
+                  " storage duration",
+                  type);
+
+      return false;
+
+    case TCTX_POINTER_ARITH:
+      if (!silent_p)
+        error_at (loc, "arithmetic on pointer to RVV type %qT", type);
+
+      return false;
+
+    case TCTX_FIELD:
+      if (silent_p)
+        ;
+      else if (lang_GNU_CXX ())
+        error_at (loc, "member variables cannot have RVV type %qT", type);
+      else
+        error_at (loc, "fields cannot have RVV type %qT", type);
+
+      return false;
+
+    case TCTX_ARRAY_ELEMENT:
+      if (!silent_p)
+        error_at (loc, "array elements cannot have RVV type %qT", type);
+
+      return false;
+
+    case TCTX_ALLOCATION:
+      if (!silent_p)
+        error_at (loc, "cannot allocate objects with RVV type %qT", type);
+
+      return false;
+
+    case TCTX_DEALLOCATION:
+      if (!silent_p)
+        error_at (loc, "cannot delete objects with RVV type %qT", type);
+
+      return false;
+
+    case TCTX_EXCEPTIONS:
+      if (!silent_p)
+        error_at (loc, "cannot throw or catch RVV type %qT", type);
+
+      return false;
+
+    case TCTX_CAPTURE_BY_COPY:
+      if (!silent_p)
+        error_at (loc, "capture by copy of RVV type %qT", type);
+
+      return false;
+    }
+
+  gcc_unreachable ();
+}
+
+/* If TYPE is a built-in type defined by the RVV ABI, return the mangled name,
+   otherwise return NULL.  */
+const char *
+mangle_builtin_type (const_tree type)
+{
+  /* ??? The C++ frontend normally strips qualifiers and attributes before
+     calling this hook, adding separate mangling for attributes that affect
+     type identity.  Fortunately the type copy will have the same TYPE_NAME
+     as the original, so we can get the attributes from there.  */
+  if (TYPE_NAME (type) && TREE_CODE (TYPE_NAME (type)) == TYPE_DECL)
+    type = TREE_TYPE (TYPE_NAME (type));
+  if (tree attr = lookup_rvv_type_attribute (type))
+    if (tree id = TREE_VALUE (chain_index (4, TREE_VALUE (attr))))
+      return IDENTIFIER_POINTER (id);
+  return NULL;
 }
 
 /* Return the function decl with RVV function subcode CODE, or error_mark_node
diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h
index 6bba4c90c3a..ef91248035c 100644
--- a/gcc/config/riscv/riscv-vector-builtins.h
+++ b/gcc/config/riscv/riscv-vector-builtins.h
@@ -53,6 +53,9 @@ rtx expand_builtin (unsigned int, tree, rtx);
 bool check_builtin_call (location_t, vec<location_t>, unsigned int,
 			 tree, unsigned int, tree *);
 machine_mode vector_builtin_mode (scalar_mode, enum vlmul_field_enum);
+bool builtin_type_p (const_tree);
+bool verify_type_context (location_t, type_context_kind, const_tree, bool);
+const char * mangle_builtin_type (const_tree);
 
 } // end namespace riscv_vector
 
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index b82a38da7c9..8c78e726a19 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -67,6 +67,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-vectorizer.h"
 #include "tree-ssa-loop-niter.h"
 #include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector.h"
 
 /* True if X is an UNSPEC wrapper around a SYMBOL_REF or LABEL_REF.  */
 #define UNSPEC_ADDRESS_P(X)					\
@@ -5780,6 +5782,124 @@ riscv_asan_shadow_offset (void)
   return TARGET_64BIT ? (HOST_WIDE_INT_1 << 29) : 0;
 }
 
+/* Implement TARGET_VECTOR_MODE_SUPPORTED_P.  */
+
+static bool
+riscv_vector_mode_supported_p (machine_mode mode)
+{
+  if (TARGET_VECTOR && rvv_mode_p (mode))
+    return true;
+
+  return false;
+}
+
+/* Implement TARGET_VECTOR_ALIGNMENT.  */
+
+static HOST_WIDE_INT
+riscv_vector_alignment (const_tree type)
+{
+  /* ??? Checking the mode isn't ideal, but VECTOR_BOOLEAN_TYPE_P can
+     be set for non-predicate vectors of booleans.  Modes are the most
+     direct way we have of identifying real RVV predicate types.  */
+  /* FIXME: The RVV spec does not specify the alignment of the bool
+     types, so we use one-byte alignment.  */
+  if (GET_MODE_CLASS (TYPE_MODE (type)) == MODE_VECTOR_BOOL)
+    return 8;
+
+  widest_int min_size =
+      constant_lower_bound (wi::to_poly_widest (TYPE_SIZE (type)));
+  return wi::umin (min_size, 128).to_uhwi ();
+}
+
+/* Implement target hook TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT.  */
+
+static poly_uint64
+riscv_vectorize_preferred_vector_alignment (const_tree type)
+{
+  if (rvv_mode_p (TYPE_MODE (type)))
+    {
+      /* If the length of the vector is a fixed power of 2, try to align
+         to that length, otherwise don't try to align at all.  */
+      HOST_WIDE_INT result;
+
+      if (!GET_MODE_BITSIZE (TYPE_MODE (type)).is_constant (&result) ||
+          !pow2p_hwi (result))
+        result = TYPE_ALIGN (TREE_TYPE (type));
+
+      return result;
+    }
+
+  return default_preferred_vector_alignment (type);
+}
+
+/* Implement target hook TARGET_VECTORIZE_VECTOR_ALIGNMENT_REACHABLE.  */
+
+static bool
+riscv_simd_vector_alignment_reachable (const_tree type, bool is_packed)
+{
+  if (is_packed)
+    return false;
+
+  /* For fixed-length vectors, check that the vectorizer will aim for
+     full-vector alignment.  This isn't true for generic GCC vectors
+     that are wider than the ABI maximum of 128 bits.  */
+  poly_uint64 preferred_alignment =
+      riscv_vectorize_preferred_vector_alignment (type);
+  if (TREE_CODE (TYPE_SIZE (type)) == INTEGER_CST &&
+      maybe_ne (wi::to_widest (TYPE_SIZE (type)), preferred_alignment))
+    return false;
+
+  /* Vectors whose size is <= BIGGEST_ALIGNMENT are naturally aligned.  */
+  return true;
+}
+
+/* Return true if the vector misalignment factor is supported by the
+   target.  */
+
+static bool
+riscv_builtin_support_vector_misalignment (machine_mode mode, const_tree type,
+                                           int misalignment, bool is_packed)
+{
+  return default_builtin_support_vector_misalignment (mode, type, misalignment,
+                                                      is_packed);
+}
+
+/* Implement TARGET_COMPATIBLE_VECTOR_TYPES_P.  */
+
+static bool
+riscv_compatible_vector_types_p (const_tree type1, const_tree type2)
+{
+  return (riscv_vector::builtin_type_p (type1) ==
+          riscv_vector::builtin_type_p (type2));
+}
+
+/* Implement TARGET_VERIFY_TYPE_CONTEXT.  */
+
+static bool
+riscv_verify_type_context (location_t loc, type_context_kind context,
+                           const_tree type, bool silent_p)
+{
+  return riscv_vector::verify_type_context (loc, context, type, silent_p);
+}
+
+/* Implement TARGET_MANGLE_TYPE.  */
+
+static const char *
+riscv_mangle_type (const_tree type)
+{
+  /* Mangle all vector types for the vector extension.  */
+  /* The mangle name follows the rule of aarch64
+     that is "u" + length of (abi_name) + abi_name. */
+  if (TYPE_NAME (type) != NULL)
+    {
+      const char *res = riscv_vector::mangle_builtin_type (type);
+      if (res)
+        return res;
+    }
+  /* Use the default mangling.  */
+  return NULL;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -5974,6 +6094,30 @@ riscv_asan_shadow_offset (void)
 #define TARGET_DEFAULT_TARGET_FLAGS (MASK_BIG_ENDIAN)
 #endif
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_P
+#define TARGET_VECTOR_MODE_SUPPORTED_P riscv_vector_mode_supported_p
+
+#undef TARGET_VECTOR_ALIGNMENT
+#define TARGET_VECTOR_ALIGNMENT riscv_vector_alignment
+
+#undef TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT
+#define TARGET_VECTORIZE_PREFERRED_VECTOR_ALIGNMENT riscv_vectorize_preferred_vector_alignment
+
+#undef TARGET_VECTORIZE_VECTOR_ALIGNMENT_REACHABLE
+#define TARGET_VECTORIZE_VECTOR_ALIGNMENT_REACHABLE riscv_simd_vector_alignment_reachable
+
+#undef TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT
+#define TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT riscv_builtin_support_vector_misalignment
+
+#undef TARGET_COMPATIBLE_VECTOR_TYPES_P
+#define TARGET_COMPATIBLE_VECTOR_TYPES_P riscv_compatible_vector_types_p
+
+#undef TARGET_VERIFY_TYPE_CONTEXT
+#define TARGET_VERIFY_TYPE_CONTEXT riscv_verify_type_context
+
+#undef TARGET_MANGLE_TYPE
+#define TARGET_MANGLE_TYPE riscv_mangle_type
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 04/21] Add RVV intrinsic enable #pragma riscv intrinsic "vector" and introduce RVV header "riscv_vector.h"
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (2 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 03/21] Add RVV datatypes juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 05/21] Add RVV configuration intrinsic juzhe.zhong
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config.gcc: New header.
        * config/riscv/riscv-c.cc (riscv_pragma_intrinsic): New function.
        (riscv_check_builtin_call): New function.
        (riscv_register_pragmas): New function.
        * config/riscv/riscv-protos.h (riscv_register_pragmas): New function.
        * config/riscv/riscv.h (REGISTER_TARGET_PRAGMAS): New macro.
        * config/riscv/riscv_vector.h: New file.

---
 gcc/config.gcc                  |  1 +
 gcc/config/riscv/riscv-c.cc     | 65 +++++++++++++++++++++++++++++++++
 gcc/config/riscv/riscv-protos.h |  1 +
 gcc/config/riscv/riscv.h        |  2 +
 gcc/config/riscv/riscv_vector.h | 41 +++++++++++++++++++++
 5 files changed, 110 insertions(+)
 create mode 100644 gcc/config/riscv/riscv_vector.h

diff --git a/gcc/config.gcc b/gcc/config.gcc
index bdda82ae576..042a7a17737 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -517,6 +517,7 @@ pru-*-*)
 	;;
 riscv*)
 	cpu_type=riscv
+	extra_headers="riscv_vector.h"
 	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o riscv-vector-builtins-functions.o riscv-vector-builtins.o"
 	d_target_objs="riscv-d.o"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-builtins.cc \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index eb7ef09297e..5839e849092 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -25,9 +25,17 @@ along with GCC; see the file COPYING3.  If not see
 #include "system.h"
 #include "coretypes.h"
 #include "tm.h"
+#include "input.h"
+#include "memmodel.h"
+#include "tm_p.h"
+#include "flags.h"
 #include "c-family/c-common.h"
 #include "cpplib.h"
+#include "c-family/c-pragma.h"
+#include "langhooks.h"
+#include "target.h"
 #include "riscv-subset.h"
+#include "riscv-vector-builtins.h"
 
 #define builtin_define(TXT) cpp_define (pfile, TXT)
 
@@ -155,3 +163,60 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
       builtin_define_with_int_value (buf, version_value);
     }
 }
+
+/* Implement "#pragma riscv intrinsic".  */
+static void
+riscv_pragma_intrinsic (cpp_reader *)
+{
+  tree x;
+
+  if (pragma_lex (&x) != CPP_STRING)
+    {
+      error ("%<#pragma riscv intrinsic%> requires a string parameter");
+      return;
+    }
+
+  const char *name = TREE_STRING_POINTER (x);
+
+  if (strcmp (name, "vector") == 0)
+    {
+      if (!TARGET_VECTOR)
+	error ("%<#pragma riscv intrinsic%> option %qs needs 'V' extension enabled", name);
+
+      riscv_vector::handle_pragma_vector ();
+    }
+  else
+    error ("unknown %<#pragma riscv intrinsic%> option %qs", name);
+}
+
+/* Implement TARGET_CHECK_BUILTIN_CALL.  */
+
+static bool
+riscv_check_builtin_call (location_t loc, vec<location_t> arg_loc,
+			  tree fndecl, tree orig_fndecl,
+			  unsigned int nargs, tree *args)
+{
+  unsigned int code = DECL_MD_FUNCTION_CODE (fndecl);
+  unsigned int subcode = code >> RISCV_BUILTIN_SHIFT;
+
+  switch (code & RISCV_BUILTIN_CLASS)
+    {
+    case RISCV_BUILTIN_GENERAL:
+      return true;
+
+    case RISCV_BUILTIN_VECTOR:
+      return riscv_vector::check_builtin_call (loc, arg_loc, subcode,
+					       orig_fndecl, nargs, args);
+    }
+
+  gcc_unreachable ();
+}
+
+/* Implement REGISTER_TARGET_PRAGMAS.  */
+
+void
+riscv_register_pragmas (void)
+{
+  targetm.check_builtin_call = riscv_check_builtin_call;
+  c_register_pragma ("riscv", "intrinsic", riscv_pragma_intrinsic);
+}
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 1cb3586d1f1..4a4ac645f55 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -116,6 +116,7 @@ extern bool rvv_legitimate_poly_int_p (rtx);
 extern unsigned int rvv_offset_temporaries (bool, poly_int64);
 extern enum vlmul_field_enum riscv_classify_vlmul_field (machine_mode);
 extern int rvv_regsize (machine_mode);
+extern void riscv_register_pragmas (void);
 
 /* We classify builtin types into two classes:
    1. General builtin class which is using the
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 8f56a5a4746..cb4cfc0f73e 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -1067,4 +1067,6 @@ extern void riscv_remove_unneeded_save_restore_calls (void);
 
 #define TARGET_SUPPORTS_WIDE_INT 1
 
+#define REGISTER_TARGET_PRAGMAS() riscv_register_pragmas ()
+
 #endif /* ! GCC_RISCV_H */
diff --git a/gcc/config/riscv/riscv_vector.h b/gcc/config/riscv/riscv_vector.h
new file mode 100644
index 00000000000..ef1820a07cb
--- /dev/null
+++ b/gcc/config/riscv/riscv_vector.h
@@ -0,0 +1,41 @@
+/* Header of intrinsics for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2021-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai).
+   Based on MIPS target for GNU compiler.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_VECTOR_H
+#define __RISCV_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+typedef float float32_t;
+typedef double float64_t;
+
+typedef float __float32_t;
+typedef double __float64_t;
+
+/* NOTE: This implementation of riscv_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#endif // __RISCV_VECTOR_H
\ No newline at end of file
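
As a minimal usage sketch of the new header (vsetvl_e8m1 is one of the
configuration intrinsics added by patch 05/21 later in this series):

#include <riscv_vector.h>

size_t
foo (size_t avl)
{
  /* The RVV types and intrinsics are injected by the pragma above.  */
  return vsetvl_e8m1 (avl);
}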
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 05/21] Add RVV configuration intrinsic
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (3 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 04/21] Add RVV intrinsic enable #pragma riscv intrinsic "vector" and introduce RVV header "riscv_vector.h" juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 06/21] Add insert-vsetvl pass juzhe.zhong
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-protos.h (riscv_register_pragmas): Move
        declaration next to the other riscv-c.cc routines.
        (riscv_classify_vlmul_field): Rename to ...
        (rvv_classify_vlmul_field): ... this.
        (rvv_parse_vsew_field): Declare.
        (rvv_parse_vlmul_field): Likewise.
        (rvv_parse_vta_field): Likewise.
        (rvv_parse_vma_field): Likewise.
        * config/riscv/riscv-vector-builtins-functions.cc (get_vtype_for_mode): New function.
        (mode2data_type_str): New function.
        (config::call_properties): New function.
        (config::assemble_name): New function.
        (config::get_return_type): New function.
        (vsetvl::get_argument_types): New function.
        (vsetvl::expand): New function.
        (vsetvlmax::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def (vsetvl): New
        function entry.
        (vsetvlmax): Likewise.
        * config/riscv/riscv-vector-builtins-functions.h (class config): New class.
        (class vsetvl): New class.
        (class vsetvlmax): New class.
        * config/riscv/riscv-vector-builtins-iterators.def (VI): New iterator.
        * config/riscv/riscv-vector.cc (rvv_parse_vsew_field): New function.
        (rvv_parse_vlmul_field): New function.
        (rvv_parse_vta_field): New function.
        (rvv_parse_vma_field): New function.
        * config/riscv/riscv-vector.h (rvv_parse_vsew_field): New function.
        (rvv_parse_vlmul_field): New function.
        (rvv_parse_vta_field): New function.
        (rvv_parse_vma_field): New function.
        * config/riscv/riscv.md: Add X0_REGNUM constant.
        * config/riscv/vector-iterators.md (unspec): New unspec.
        * config/riscv/vector.md: New file.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp: New test.
        * gcc.target/riscv/rvv/intrinsic/vsetvl.c: New test.

---
 gcc/config/riscv/riscv-protos.h               |   8 +-
 .../riscv/riscv-vector-builtins-functions.cc  | 232 ++++++
 .../riscv/riscv-vector-builtins-functions.def |   6 +
 .../riscv/riscv-vector-builtins-functions.h   |  36 +
 .../riscv/riscv-vector-builtins-iterators.def |  23 +
 gcc/config/riscv/riscv-vector.cc              |  28 +
 gcc/config/riscv/riscv-vector.h               |   4 +
 gcc/config/riscv/riscv.md                     |   6 +-
 gcc/config/riscv/vector-iterators.md          |  14 +-
 gcc/config/riscv/vector.md                    | 140 ++++
 .../riscv/rvv/intrinsic/rvv-intrinsic.exp     |  47 ++
 .../gcc.target/riscv/rvv/intrinsic/vsetvl.c   | 733 ++++++++++++++++++
 12 files changed, 1273 insertions(+), 4 deletions(-)
 create mode 100644 gcc/config/riscv/vector.md
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsetvl.c

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 4a4ac645f55..cae2974b54f 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -77,6 +77,7 @@ extern bool riscv_gpr_save_operation_p (rtx);
 
 /* Routines implemented in riscv-c.cc.  */
 void riscv_cpu_cpp_builtins (cpp_reader *);
+void riscv_register_pragmas (void);
 
 /* Routines implemented in riscv-d.cc  */
 extern void riscv_d_target_versions (void);
@@ -114,9 +115,12 @@ extern const riscv_cpu_info *riscv_find_cpu (const char *);
 extern bool rvv_mode_p (machine_mode);
 extern bool rvv_legitimate_poly_int_p (rtx);
 extern unsigned int rvv_offset_temporaries (bool, poly_int64);
-extern enum vlmul_field_enum riscv_classify_vlmul_field (machine_mode);
+extern enum vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
+extern unsigned int rvv_parse_vsew_field (unsigned int);
+extern unsigned int rvv_parse_vlmul_field (unsigned int);
+extern bool rvv_parse_vta_field (unsigned int);
+extern bool rvv_parse_vma_field (unsigned int);
 extern int rvv_regsize (machine_mode);
-extern void riscv_register_pragmas (void);
 
 /* We classify builtin types into two classes:
    1. General builtin class which is using the
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index 19bcb66a83f..0acda8f671e 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -49,6 +49,164 @@ static const unsigned int CP_WRITE_CSR = 1U << 6;
    when the required extension is disabled.  */
 static bool reported_missing_extension_p;
 
+/* Generate vtype bitmap for a specific machine mode. */
+static unsigned int
+get_vtype_for_mode (machine_mode mode)
+{
+  switch (mode)
+    {
+    case VNx2QImode:
+    case VNx2BImode:
+      return 0x45;
+
+    case VNx4QImode:
+    case VNx4BImode:
+      return 0x46;
+
+    case VNx8QImode:
+    case VNx8BImode:
+      return 0x47;
+
+    case VNx16QImode:
+    case VNx16BImode:
+      return 0x40;
+
+    case VNx32QImode:
+    case VNx32BImode:
+      return 0x41;
+
+    case VNx64QImode:
+    case VNx64BImode:
+      return 0x42;
+
+    case VNx128QImode:
+    case VNx128BImode:
+      return 0x43;
+
+    case VNx2HImode:
+      return 0x4e;
+
+    case VNx4HImode:
+      return 0x4f;
+
+    case VNx8HImode:
+      return 0x48;
+
+    case VNx16HImode:
+      return 0x49;
+
+    case VNx32HImode:
+      return 0x4a;
+
+    case VNx64HImode:
+      return 0x4b;
+
+    case VNx2SImode:
+    case VNx2SFmode:
+      return 0x57;
+
+    case VNx4SImode:
+    case VNx4SFmode:
+      return 0x50;
+
+    case VNx8SImode:
+    case VNx8SFmode:
+      return 0x51;
+
+    case VNx16SImode:
+    case VNx16SFmode:
+      return 0x52;
+
+    case VNx32SImode:
+    case VNx32SFmode:
+      return 0x53;
+
+    case VNx2DImode:
+    case VNx2DFmode:
+      return 0x58;
+
+    case VNx4DImode:
+    case VNx4DFmode:
+      return 0x59;
+
+    case VNx8DImode:
+    case VNx8DFmode:
+      return 0x5a;
+
+    case VNx16DImode:
+      return 0x5b;
+
+    default:
+      break;
+    }
+
+  gcc_unreachable ();
+}
+
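+/* Map a machine mode to the data-type suffix used in intrinsic names.
+   For example, VNx4SImode yields "_i32m1" ("_u32m1" when U is set,
+   "_e32m1" when IE is set).  */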
+static const char *
+mode2data_type_str (machine_mode mode, bool u, bool ie)
+{
+  switch (mode)
+    {
+  #define MODE2DATA_TYPE_STR(MODE, NBITS, LMUL) \
+    case VNx##MODE##mode: \
+      return ie ? "_e"#NBITS""#LMUL"" : u ? "_u"#NBITS""#LMUL"" : "_i"#NBITS""#LMUL"";
+    MODE2DATA_TYPE_STR (2QI, 8, mf8)
+    MODE2DATA_TYPE_STR (4QI, 8, mf4)
+    MODE2DATA_TYPE_STR (8QI, 8, mf2)
+    MODE2DATA_TYPE_STR (16QI, 8, m1)
+    MODE2DATA_TYPE_STR (32QI, 8, m2)
+    MODE2DATA_TYPE_STR (64QI, 8, m4)
+    MODE2DATA_TYPE_STR (128QI, 8, m8)
+    MODE2DATA_TYPE_STR (2HI, 16, mf4)
+    MODE2DATA_TYPE_STR (4HI, 16, mf2)
+    MODE2DATA_TYPE_STR (8HI, 16, m1)
+    MODE2DATA_TYPE_STR (16HI, 16, m2)
+    MODE2DATA_TYPE_STR (32HI, 16, m4)
+    MODE2DATA_TYPE_STR (64HI, 16, m8)
+    MODE2DATA_TYPE_STR (2SI, 32, mf2)
+    MODE2DATA_TYPE_STR (4SI, 32, m1)
+    MODE2DATA_TYPE_STR (8SI, 32, m2)
+    MODE2DATA_TYPE_STR (16SI, 32, m4)
+    MODE2DATA_TYPE_STR (32SI, 32, m8)
+    MODE2DATA_TYPE_STR (2DI, 64, m1)
+    MODE2DATA_TYPE_STR (4DI, 64, m2)
+    MODE2DATA_TYPE_STR (8DI, 64, m4)
+    MODE2DATA_TYPE_STR (16DI, 64, m8)
+  #undef MODE2DATA_TYPE_STR
+  #define MODE2DATA_TYPE_STR(MODE, NBITS, LMUL) \
+    case VNx##MODE##mode: \
+      return "_f"#NBITS""#LMUL"";
+    MODE2DATA_TYPE_STR (2SF, 32, mf2)
+    MODE2DATA_TYPE_STR (4SF, 32, m1)
+    MODE2DATA_TYPE_STR (8SF, 32, m2)
+    MODE2DATA_TYPE_STR (16SF, 32, m4)
+    MODE2DATA_TYPE_STR (32SF, 32, m8)
+    MODE2DATA_TYPE_STR (2DF, 64, m1)
+    MODE2DATA_TYPE_STR (4DF, 64, m2)
+    MODE2DATA_TYPE_STR (8DF, 64, m4)
+    MODE2DATA_TYPE_STR (16DF, 64, m8)
+  #undef MODE2DATA_TYPE_STR
+    case VNx2BImode: return "_b64";
+    case VNx4BImode: return "_b32";
+    case VNx8BImode: return "_b16";
+    case VNx16BImode: return "_b8";
+    case VNx32BImode: return "_b4";
+    case VNx64BImode: return "_b2";
+    case VNx128BImode: return "_b1";
+    case QImode: return u ? "_u8" : "_i8";
+    case HImode: return u ? "_u16" : "_i16";
+    case SImode: return u ? "_u32" : "_i32";
+    case DImode: return u ? "_u64" : "_i64";
+    case SFmode: return "_f32";
+    case DFmode: return "_f64";
+    default:
+      break;
+    }
+
+  gcc_unreachable ();
+}
+
 static tree
 mode2mask_t (machine_mode mode)
 {
@@ -990,6 +1148,80 @@ function_builder::register_function ()
     }
 }
 
+/* Function implementations for the config builder.  */
+unsigned int
+config::call_properties () const
+{
+  return CP_READ_CSR | CP_WRITE_CSR;
+}
+
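+/* For a vsetvl instance whose first argument mode is VNx16QImode this
+   produces the name "vsetvl_e8m1", matching the intrinsics exercised
+   in the vsetvl.c testcase below.  */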
+char *
+config::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  const char *name = instance.get_base_name ();
+  const char *dt = mode2data_type_str (mode, false, true);
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s", name, dt);
+  return nullptr;
+}
+
+tree
+config::get_return_type (const function_instance &) const
+{
+  return size_type_node;
+}
+
+/* Function implementations for the vsetvl builder.  */
+void
+vsetvl::get_argument_types (const function_instance &,
+                            vec<tree> &argument_types) const
+{
+  argument_types.quick_push (size_type_node);
+}
+
+rtx
+vsetvl::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  struct expand_operand ops[MAX_RECOG_OPERANDS];
+  tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
+
+  /* Map any target to operand 0.  */
+  int opno = 0;
+  create_output_operand (&ops[opno++], target, Pmode);
+  unsigned int vtype =
+      get_vtype_for_mode (instance.get_arg_pattern ().arg_list[0]);
+  enum insn_code icode = code_for_vsetvl (Pmode);
+  add_input_operand (&ops[opno++], exp, 0);
+  /* create vtype input operand.  */
+  create_input_operand (&ops[opno++], GEN_INT (vtype), Pmode);
+  /* Map the arguments to the other operands.  */
+  gcc_assert (opno == insn_data[icode].n_generator_args);
+  return generate_builtin_insn (icode, opno, ops,
+                                !function_returns_void_p (fndecl));
+}
+
+/* Function implementations for the vsetvlmax builder.  It differs from
+   vsetvl only in passing x0 as the AVL operand, which sets vl to VLMAX.  */
+rtx
+vsetvlmax::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  struct expand_operand ops[MAX_RECOG_OPERANDS];
+  tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
+
+  /* Map any target to operand 0.  */
+  int opno = 0;
+  create_output_operand (&ops[opno++], target, Pmode);
+  unsigned int vtype =
+      get_vtype_for_mode (instance.get_arg_pattern ().arg_list[0]);
+  enum insn_code icode = code_for_vsetvl (Pmode);
+  create_input_operand (&ops[opno++], gen_rtx_REG (Pmode, X0_REGNUM), Pmode);
+  /* Create the vtype input operand.  */
+  create_input_operand (&ops[opno++], GEN_INT (vtype), Pmode);
+  /* Map the arguments to the other operands.  */
+  gcc_assert (opno == insn_data[icode].n_generator_args);
+  return generate_builtin_insn (icode, opno, ops,
+                                !function_returns_void_p (fndecl));
+}
+
 } // end namespace riscv_vector
 
 using namespace riscv_vector;
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index f6161012813..666e8503d81 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -29,6 +29,12 @@ along with GCC; see the file COPYING3. If not see
 #define VATTR(A, B)
 #endif
 
+/* base_name, class_name, arg_pattern, intrinsic_type, pred_type, operation_type */
+#define REQUIRED_EXTENSIONS (RISCV_TARGET_VECTOR)
+/* 6. Configuration-Setting Instructions. */
+DEF_RVV_FUNCTION(vsetvl, vsetvl, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(vsetvlmax, vsetvlmax, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
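+/* Each entry above expands to one intrinsic per mode in the VI
+   iterator, e.g. vsetvl_e8mf8 through vsetvl_e64m8 (22 variants).  */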
+#undef REQUIRED_EXTENSIONS
 #undef DEF_RVV_FUNCTION
 #undef VITER
 #undef VATTR
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h
index 1b769743857..9846ded1155 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.h
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.h
@@ -486,6 +486,42 @@ struct registered_function_hasher : nofree_ptr_hash<registered_function>
   static bool equal (value_type, const compare_type &);
 };
 
+/* A function_builder for configuration-setting functions.  */
+class config : public function_builder
+{
+public:
+  // Use function_builder's constructor.
+  using function_builder::function_builder;
+
+  virtual unsigned int call_properties () const override;
+  
+  virtual char * assemble_name (function_instance &) override;
+  
+  virtual tree get_return_type (const function_instance &) const override;
+};
+
+/* A function_builder for vsetvl functions.  */
+class vsetvl : public config
+{
+public:
+  // Use config's constructor.
+  using config::config;
+  
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+  
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_builder for vsetvlmax functions.  */
+class vsetvlmax : public config
+{
+public:
+  // Use config's constructor.
+  using config::config;
+  
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
 } // namespace riscv_vector
 
 #endif // end GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-iterators.def b/gcc/config/riscv/riscv-vector-builtins-iterators.def
index ac0c37e12d4..cc968f5534f 100644
--- a/gcc/config/riscv/riscv-vector-builtins-iterators.def
+++ b/gcc/config/riscv/riscv-vector-builtins-iterators.def
@@ -7,6 +7,29 @@
 #define DEF_RISCV_ARG_MODE_ATTR(A, B, C, D, E)
 #endif
 
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VI, 22)
+DEF_RISCV_ARG_MODE_ATTR(VI, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 6, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 7, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 8, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 10, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 11, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 12, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 13, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 15, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 16, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 17, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 18, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 19, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 20, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VI, 21, VNx16DI, VNx16DI, TARGET_ANY)
 
 #undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
 #undef DEF_RISCV_ARG_MODE_ATTR
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index 87dc6739f4f..a9c8b290104 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -205,6 +205,34 @@ rvv_classify_vlmul_field (machine_mode mode)
 	gcc_unreachable ();
 }
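+
+/* The helpers below extract the individual fields of a vtype bitmap.
+   For example, vtype 0x45 (what get_vtype_for_mode assigns to
+   VNx2QImode) decodes as vsew = 0 (e8), vlmul = 5 (mf8), vta = 1 (ta)
+   and vma = 0 (mu).  */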
 
+/* Return the vsew field for a vtype bitmap. */
+unsigned int
+rvv_parse_vsew_field (unsigned int vtype)
+{
+  return (vtype >> 3) & 0x7;
+}
+
+/* Return the vlmul field for a vtype bitmap. */
+unsigned int
+rvv_parse_vlmul_field (unsigned int vtype)
+{
+  return vtype & 0x7;
+}
+
+/* Return the vta field for a vtype bitmap. */
+bool
+rvv_parse_vta_field (unsigned int vtype)
+{
+  return (vtype & 0x40) != 0;
+}
+
+/* Return the vma field for a vtype bitmap. */
+bool
+rvv_parse_vma_field (unsigned int vtype)
+{
+  return (vtype & 0x80) != 0;
+}
+
 /* Return vlmul register size for a machine mode. */
 
 int
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
index 62df507e7df..2c242959077 100644
--- a/gcc/config/riscv/riscv-vector.h
+++ b/gcc/config/riscv/riscv-vector.h
@@ -24,6 +24,10 @@ bool riscv_vector_mode_p (machine_mode);
 bool rvv_legitimate_poly_int_p (rtx);
 unsigned int rvv_offset_temporaries (bool, poly_int64);
 vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
+extern unsigned int rvv_parse_vsew_field (unsigned int);
+extern unsigned int rvv_parse_vlmul_field (unsigned int);
+extern bool rvv_parse_vta_field (unsigned int);
+extern bool rvv_parse_vma_field (unsigned int);
 int rvv_regsize (machine_mode);
 opt_machine_mode rvv_get_mask_mode (machine_mode);
 #endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 8e880ba8599..2b0b76458a7 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -106,6 +106,7 @@
    ;; Constant helper for RVV 
    (VL_REGNUM			66)
    (VTYPE_REGNUM		67)
+   (X0_REGNUM			0)
 ])
 
 (include "predicates.md")
@@ -182,10 +183,12 @@
 ;; nop		no operation
 ;; ghost	an instruction that produces no real code
 ;; bitmanip	bit manipulation instructions
+;; vsetvl	vector configuration-setting instructions
 (define_attr "type"
   "unknown,branch,jump,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
-   fmadd,fdiv,fcmp,fcvt,fsqrt,multi,auipc,sfb_alu,nop,ghost,bitmanip,rotate"
+   fmadd,fdiv,fcmp,fcvt,fsqrt,multi,auipc,sfb_alu,nop,ghost,bitmanip,rotate,
+   vsetvl"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
@@ -2952,3 +2955,4 @@
 (include "pic.md")
 (include "generic.md")
 (include "sifive-7.md")
+(include "vector.md")
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 450e20c44ce..3e0699de86c 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -16,4 +16,16 @@
 ;;
 ;; You should have received a copy of the GNU General Public License
 ;; along with GCC; see the file COPYING3.  If not see
-;; <http://www.gnu.org/licenses/>.
\ No newline at end of file
+;; <http://www.gnu.org/licenses/>.
+
+(define_c_enum "unspec" [
+  ;; vsetvli.
+  UNSPEC_VSETVLI
+])
+
+;; All integer vector modes supported for RVV.
+(define_mode_iterator VI [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI VNx128QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx2DI VNx4DI VNx8DI VNx16DI])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
new file mode 100644
index 00000000000..31fdec981b9
--- /dev/null
+++ b/gcc/config/riscv/vector.md
@@ -0,0 +1,140 @@
+;; Machine description for RISC-V vector operations.
+;; Copyright (C) 2022-2022 Free Software Foundation, Inc.
+;; Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+;; This file describes the RISC-V instructions from the standard 'V' Vector
+;; extension, version 1.0.
+;;
+;; This file includes:
+;;
+;; - The RVV intrinsic implementation (document: https://github.com/riscv/rvv-intrinsic-doc)
+
+(include "vector-iterators.md")
+;; ===============================================================================
+;; == Intrinsics
+;; ===============================================================================
+
+;; -------------------------------------------------------------------------------
+;; ---- 6. Configuration-Setting Instructions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 6.1 vsetvli/vsetivli/vsetvl instructions
+;; -------------------------------------------------------------------------------
+
+;; We don't define vsetvli as unspec_volatile because we want the
+;; instruction to remain schedulable in the future.
+;; This also means a vsetvli is deleted when no later instruction
+;; uses vl or vtype.
+;; -   | !x0 | x[rs1]    | Normal stripmining
+;; !x0 | x0  | ~0        | Set vl to VLMAX
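+;; For example, with a register AVL and vtype 0x45 the pattern below
+;; emits "vsetvli %0,%1,e8,mf8,ta,mu"; when the AVL is a 5-bit
+;; immediate (K constraint) it emits "vsetivli %0,%1,e8,mf8,ta,mu".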
+(define_insn "@vsetvl_<mode>"
+  [(parallel
+    [(set (match_operand:X 0 "register_operand" "=r")
+      (unspec:X
+        [(match_operand:X 1 "csr_operand" "rK")] UNSPEC_VSETVLI))
+     (set (reg:SI VL_REGNUM)
+       (unspec:SI
+        [(match_dup 1)] UNSPEC_VSETVLI))
+     (set (reg:SI VTYPE_REGNUM)
+       (unspec:SI
+        [(match_operand 2 "const_int_operand")] UNSPEC_VSETVLI))])]
+  "TARGET_VECTOR"
+  {
+    char buf[64];
+    gcc_assert (CONST_INT_P (operands[2]));
+    const char *insn = satisfies_constraint_K (operands[1]) ? "vsetivli\t%0,%1"
+        : "vsetvli\t%0,%1";
+    unsigned int vsew = rvv_parse_vsew_field (INTVAL (operands[2]));
+    unsigned int vlmul = rvv_parse_vlmul_field (INTVAL (operands[2]));
+    unsigned int vta = rvv_parse_vta_field (INTVAL (operands[2]));
+    unsigned int vma = rvv_parse_vma_field (INTVAL (operands[2]));
+    const char *sew = vsew == 0 ? "e8"  : vsew == 1 ? "e16"
+          : vsew == 2 ? "e32" : "e64";
+    const char *lmul = vlmul == 0 ? "m1" : vlmul == 1 ? "m2"
+          : vlmul == 2 ? "m4" : vlmul == 3 ? "m8"
+          : vlmul == 5 ? "mf8" : vlmul == 6 ? "mf4" : "mf2";
+    const char *ta = vta == 0 ? "tu" : "ta";
+    const char *ma = vma == 0 ? "mu" : "ma";
+    snprintf (buf, sizeof (buf), "%s,%s,%s,%s,%s", insn, sew, lmul, ta, ma);
+    output_asm_insn (buf, operands);
+    return "";
+  }
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "none")])
+
+(define_insn "vsetvl_zero_zero"
+  [(set (reg:SI VTYPE_REGNUM)
+    (unspec:SI
+      [(match_operand 0 "const_int_operand")] UNSPEC_VSETVLI))]
+  "TARGET_VECTOR"
+  {
+    char buf[64];
+    gcc_assert (CONST_INT_P (operands[0]));
+    const char *insn = "vsetvli\tzero,zero";
+    unsigned int vsew = rvv_parse_vsew_field (INTVAL (operands[0]));
+    unsigned int vlmul = rvv_parse_vlmul_field (INTVAL (operands[0]));
+    unsigned int vta = rvv_parse_vta_field (INTVAL (operands[0]));
+    unsigned int vma = rvv_parse_vma_field (INTVAL (operands[0]));
+    const char *sew = vsew == 0 ? "e8"  : vsew == 1 ? "e16"
+          : vsew == 2 ? "e32" : "e64";
+    const char *lmul = vlmul == 0 ? "m1" : vlmul == 1 ? "m2"
+          : vlmul == 2 ? "m4" : vlmul == 3 ? "m8"
+          : vlmul == 5 ? "mf8" : vlmul == 6 ? "mf4" : "mf2";
+    const char *ta = vta == 0 ? "tu" : "ta";
+    const char *ma = vma == 0 ? "mu" : "ma";
+    snprintf (buf, sizeof (buf), "%s,%s,%s,%s,%s", insn, sew, lmul, ta, ma);
+    output_asm_insn (buf, operands);
+    return "";
+  }
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "none")])
+
+(define_insn "@vsetvl_zero_<mode>"
+  [(parallel
+    [(set (reg:SI VL_REGNUM)
+       (unspec:SI
+        [(match_operand:X 0 "csr_operand" "rK")] UNSPEC_VSETVLI))
+     (set (reg:SI VTYPE_REGNUM)
+       (unspec:SI
+        [(match_operand 1 "const_int_operand")] UNSPEC_VSETVLI))])]
+  "TARGET_VECTOR"
+  {
+    char buf[64];
+    gcc_assert (CONST_INT_P (operands[1]));
+    const char *insn = satisfies_constraint_K (operands[0]) ? "vsetivli\tzero,%0"
+        : "vsetvli\tzero,%0";
+    unsigned int vsew = rvv_parse_vsew_field (INTVAL (operands[1]));
+    unsigned int vlmul = rvv_parse_vlmul_field (INTVAL (operands[1]));
+    unsigned int vta = rvv_parse_vta_field (INTVAL (operands[1]));
+    unsigned int vma = rvv_parse_vma_field (INTVAL (operands[1]));
+    const char *sew = vsew == 0 ? "e8"  : vsew == 1 ? "e16"
+          : vsew == 2 ? "e32" : "e64";
+    const char *lmul = vlmul == 0 ? "m1" : vlmul == 1 ? "m2"
+          : vlmul == 2 ? "m4" : vlmul == 3 ? "m8"
+          : vlmul == 5 ? "mf8" : vlmul == 6 ? "mf4" : "mf2";
+    const char *ta = vta == 0 ? "tu" : "ta";
+    const char *ma = vma == 0 ? "mu" : "ma";
+    snprintf (buf, sizeof (buf), "%s,%s,%s,%s,%s", insn, sew, lmul, ta, ma);
+    output_asm_insn (buf, operands);
+    return "";
+  }
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "none")])
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp
new file mode 100644
index 00000000000..b6c66e88759
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/rvv-intrinsic.exp
@@ -0,0 +1,47 @@
+# Copyright (C) 2022-2022 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+set gcc_march "rv64gcv_zfh"
+if [istarget riscv32-*-*] then {
+  set gcc_march "rv32gcv_zfh"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+set CFLAGS "$DEFAULT_CFLAGS -march=$gcc_march -O3"
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
+	"" $CFLAGS
+
+# All done.
+dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsetvl.c b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsetvl.c
new file mode 100644
index 00000000000..264537de402
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/vsetvl.c
@@ -0,0 +1,733 @@
+/* { dg-do compile } */
+/* { dg-skip-if "test vector intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+
+size_t test_vsetvl_e8mf8_imm0()
+{
+  size_t vl = vsetvl_e8mf8(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*mf8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf8_imm31()
+{
+  size_t vl = vsetvl_e8mf8(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*mf8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf8_imm32()
+{
+  size_t vl = vsetvl_e8mf8(32);
+  return vl;
+}
+size_t test_vsetvl_e8mf8(size_t avl)
+{
+  size_t vl = vsetvl_e8mf8(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*mf8,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8mf8()
+{
+  size_t vl = vsetvlmax_e8mf8();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*mf8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf4_imm0()
+{
+  size_t vl = vsetvl_e8mf4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf4_imm31()
+{
+  size_t vl = vsetvl_e8mf4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf4_imm32()
+{
+  size_t vl = vsetvl_e8mf4(32);
+  return vl;
+}
+size_t test_vsetvl_e8mf4(size_t avl)
+{
+  size_t vl = vsetvl_e8mf4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*mf4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8mf4()
+{
+  size_t vl = vsetvlmax_e8mf4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf2_imm0()
+{
+  size_t vl = vsetvl_e8mf2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf2_imm31()
+{
+  size_t vl = vsetvl_e8mf2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8mf2_imm32()
+{
+  size_t vl = vsetvl_e8mf2(32);
+  return vl;
+}
+size_t test_vsetvl_e8mf2(size_t avl)
+{
+  size_t vl = vsetvl_e8mf2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*mf2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8mf2()
+{
+  size_t vl = vsetvlmax_e8mf2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m1_imm0()
+{
+  size_t vl = vsetvl_e8m1(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m1_imm31()
+{
+  size_t vl = vsetvl_e8m1(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m1_imm32()
+{
+  size_t vl = vsetvl_e8m1(32);
+  return vl;
+}
+size_t test_vsetvl_e8m1(size_t avl)
+{
+  size_t vl = vsetvl_e8m1(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*m1,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8m1()
+{
+  size_t vl = vsetvlmax_e8m1();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m2_imm0()
+{
+  size_t vl = vsetvl_e8m2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m2_imm31()
+{
+  size_t vl = vsetvl_e8m2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m2_imm32()
+{
+  size_t vl = vsetvl_e8m2(32);
+  return vl;
+}
+size_t test_vsetvl_e8m2(size_t avl)
+{
+  size_t vl = vsetvl_e8m2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*m2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8m2()
+{
+  size_t vl = vsetvlmax_e8m2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m4_imm0()
+{
+  size_t vl = vsetvl_e8m4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m4_imm31()
+{
+  size_t vl = vsetvl_e8m4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m4_imm32()
+{
+  size_t vl = vsetvl_e8m4(32);
+  return vl;
+}
+size_t test_vsetvl_e8m4(size_t avl)
+{
+  size_t vl = vsetvl_e8m4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*m4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8m4()
+{
+  size_t vl = vsetvlmax_e8m4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m8_imm0()
+{
+  size_t vl = vsetvl_e8m8(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e8,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m8_imm31()
+{
+  size_t vl = vsetvl_e8m8(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e8,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e8m8_imm32()
+{
+  size_t vl = vsetvl_e8m8(32);
+  return vl;
+}
+size_t test_vsetvl_e8m8(size_t avl)
+{
+  size_t vl = vsetvl_e8m8(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e8,\s*m8,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e8m8()
+{
+  size_t vl = vsetvlmax_e8m8();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e8,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf4_imm0()
+{
+  size_t vl = vsetvl_e16mf4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf4_imm31()
+{
+  size_t vl = vsetvl_e16mf4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf4_imm32()
+{
+  size_t vl = vsetvl_e16mf4(32);
+  return vl;
+}
+size_t test_vsetvl_e16mf4(size_t avl)
+{
+  size_t vl = vsetvl_e16mf4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*mf4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16mf4()
+{
+  size_t vl = vsetvlmax_e16mf4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*mf4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf2_imm0()
+{
+  size_t vl = vsetvl_e16mf2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf2_imm31()
+{
+  size_t vl = vsetvl_e16mf2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16mf2_imm32()
+{
+  size_t vl = vsetvl_e16mf2(32);
+  return vl;
+}
+size_t test_vsetvl_e16mf2(size_t avl)
+{
+  size_t vl = vsetvl_e16mf2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*mf2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16mf2()
+{
+  size_t vl = vsetvlmax_e16mf2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m1_imm0()
+{
+  size_t vl = vsetvl_e16m1(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m1_imm31()
+{
+  size_t vl = vsetvl_e16m1(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m1_imm32()
+{
+  size_t vl = vsetvl_e16m1(32);
+  return vl;
+}
+size_t test_vsetvl_e16m1(size_t avl)
+{
+  size_t vl = vsetvl_e16m1(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*m1,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16m1()
+{
+  size_t vl = vsetvlmax_e16m1();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m2_imm0()
+{
+  size_t vl = vsetvl_e16m2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m2_imm31()
+{
+  size_t vl = vsetvl_e16m2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m2_imm32()
+{
+  size_t vl = vsetvl_e16m2(32);
+  return vl;
+}
+size_t test_vsetvl_e16m2(size_t avl)
+{
+  size_t vl = vsetvl_e16m2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*m2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16m2()
+{
+  size_t vl = vsetvlmax_e16m2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m4_imm0()
+{
+  size_t vl = vsetvl_e16m4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m4_imm31()
+{
+  size_t vl = vsetvl_e16m4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m4_imm32()
+{
+  size_t vl = vsetvl_e16m4(32);
+  return vl;
+}
+size_t test_vsetvl_e16m4(size_t avl)
+{
+  size_t vl = vsetvl_e16m4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*m4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16m4()
+{
+  size_t vl = vsetvlmax_e16m4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m8_imm0()
+{
+  size_t vl = vsetvl_e16m8(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e16,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m8_imm31()
+{
+  size_t vl = vsetvl_e16m8(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e16,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e16m8_imm32()
+{
+  size_t vl = vsetvl_e16m8(32);
+  return vl;
+}
+size_t test_vsetvl_e16m8(size_t avl)
+{
+  size_t vl = vsetvl_e16m8(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e16,\s*m8,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e16m8()
+{
+  size_t vl = vsetvlmax_e16m8();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e16,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32mf2_imm0()
+{
+  size_t vl = vsetvl_e32mf2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e32,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32mf2_imm31()
+{
+  size_t vl = vsetvl_e32mf2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e32,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32mf2_imm32()
+{
+  size_t vl = vsetvl_e32mf2(32);
+  return vl;
+}
+size_t test_vsetvl_e32mf2(size_t avl)
+{
+  size_t vl = vsetvl_e32mf2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e32,\s*mf2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e32mf2()
+{
+  size_t vl = vsetvlmax_e32mf2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e32,\s*mf2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m1_imm0()
+{
+  size_t vl = vsetvl_e32m1(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e32,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m1_imm31()
+{
+  size_t vl = vsetvl_e32m1(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e32,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m1_imm32()
+{
+  size_t vl = vsetvl_e32m1(32);
+  return vl;
+}
+size_t test_vsetvl_e32m1(size_t avl)
+{
+  size_t vl = vsetvl_e32m1(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e32,\s*m1,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e32m1()
+{
+  size_t vl = vsetvlmax_e32m1();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e32,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m2_imm0()
+{
+  size_t vl = vsetvl_e32m2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e32,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m2_imm31()
+{
+  size_t vl = vsetvl_e32m2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e32,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m2_imm32()
+{
+  size_t vl = vsetvl_e32m2(32);
+  return vl;
+}
+size_t test_vsetvl_e32m2(size_t avl)
+{
+  size_t vl = vsetvl_e32m2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e32,\s*m2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e32m2()
+{
+  size_t vl = vsetvlmax_e32m2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e32,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m4_imm0()
+{
+  size_t vl = vsetvl_e32m4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e32,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m4_imm31()
+{
+  size_t vl = vsetvl_e32m4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e32,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m4_imm32()
+{
+  size_t vl = vsetvl_e32m4(32);
+  return vl;
+}
+size_t test_vsetvl_e32m4(size_t avl)
+{
+  size_t vl = vsetvl_e32m4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e32,\s*m4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e32m4()
+{
+  size_t vl = vsetvlmax_e32m4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e32,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m8_imm0()
+{
+  size_t vl = vsetvl_e32m8(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e32,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m8_imm31()
+{
+  size_t vl = vsetvl_e32m8(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e32,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e32m8_imm32()
+{
+  size_t vl = vsetvl_e32m8(32);
+  return vl;
+}
+size_t test_vsetvl_e32m8(size_t avl)
+{
+  size_t vl = vsetvl_e32m8(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e32,\s*m8,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e32m8()
+{
+  size_t vl = vsetvlmax_e32m8();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e32,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m1_imm0()
+{
+  size_t vl = vsetvl_e64m1(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e64,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m1_imm31()
+{
+  size_t vl = vsetvl_e64m1(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e64,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m1_imm32()
+{
+  size_t vl = vsetvl_e64m1(32);
+  return vl;
+}
+size_t test_vsetvl_e64m1(size_t avl)
+{
+  size_t vl = vsetvl_e64m1(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e64,\s*m1,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e64m1()
+{
+  size_t vl = vsetvlmax_e64m1();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e64,\s*m1,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m2_imm0()
+{
+  size_t vl = vsetvl_e64m2(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e64,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m2_imm31()
+{
+  size_t vl = vsetvl_e64m2(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e64,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m2_imm32()
+{
+  size_t vl = vsetvl_e64m2(32);
+  return vl;
+}
+size_t test_vsetvl_e64m2(size_t avl)
+{
+  size_t vl = vsetvl_e64m2(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e64,\s*m2,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e64m2()
+{
+  size_t vl = vsetvlmax_e64m2();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e64,\s*m2,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m4_imm0()
+{
+  size_t vl = vsetvl_e64m4(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e64,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m4_imm31()
+{
+  size_t vl = vsetvl_e64m4(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e64,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m4_imm32()
+{
+  size_t vl = vsetvl_e64m4(32);
+  return vl;
+}
+size_t test_vsetvl_e64m4(size_t avl)
+{
+  size_t vl = vsetvl_e64m4(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e64,\s*m4,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e64m4()
+{
+  size_t vl = vsetvlmax_e64m4();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e64,\s*m4,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m8_imm0()
+{
+  size_t vl = vsetvl_e64m8(0);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*0,\s*e64,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m8_imm31()
+{
+  size_t vl = vsetvl_e64m8(31);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetivli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*31,\s*e64,\s*m8,\s*ta,\s*mu} 1 } } */
+
+size_t test_vsetvl_e64m8_imm32()
+{
+  size_t vl = vsetvl_e64m8(32);
+  return vl;
+}
+size_t test_vsetvl_e64m8(size_t avl)
+{
+  size_t vl = vsetvl_e64m8(avl);
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*e64,\s*m8,\s*ta,\s*mu} 2 } } */
+
+size_t test_vsetvlmax_e64m8()
+{
+  size_t vl = vsetvlmax_e64m8();
+  return vl;
+}
+/* { dg-final { scan-assembler-times {vsetvli\s+(?:ra|[sgtf]p|t[0-6]|s[0-9]|s10|s11|a[0-7]),\s*zero,\s*e64,\s*m8,\s*ta,\s*mu} 1 } } */
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 06/21] Add insert-vsetvl pass
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (4 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 05/21] Add RVV configuration intrinsic juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 07/21] Add register spilling support juzhe.zhong
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config.gcc: Add riscv-insert-vsetvl.o extra_objs for RVV support.
        * config/riscv/constraints.md (Ws5): New constraint.
        * config/riscv/predicates.md (p_reg_or_const_csr_operand): New predicate.
        (vector_reg_or_const0_operand): New predicate.
        (vector_move_operand): New predicate.
        (reg_or_mem_operand): New predicate.
        (reg_or_simm5_operand): New predicate.
        (reg_or_const_int_operand): New predicate.
        * config/riscv/riscv-opts.h (enum vsew_field_enum): New enum.
        * config/riscv/riscv-passes.def (INSERT_PASS_AFTER): Run insert vsetvl pass after pass_split_all_insns.
        (INSERT_PASS_BEFORE): Run insert vsetvl pass before pass_sched2.
        * config/riscv/riscv-protos.h (make_pass_insert_vsetvl): New function.
        (make_pass_insert_vsetvl2): New function.
        (rvv_mask_mode_p): New function.
        (rvv_classify_vsew_field): New function.
        (rvv_gen_policy): New function.
        (rvv_get_mask_mode): New function.
        (rvv_translate_attr_mode): New function.
        * config/riscv/riscv-vector-builtins-iterators.def (V): New iterator.
        (VF): New iterator.
        (VB): New iterator.
        (VFULL): New iterator.
        (VPARTIAL): New iterator.
        (V64BITI): New iterator.
        (VM): New iterator.
        (VSUB): New iterator.
        (VDI_TO_VSI): New iterator.
        (VDI_TO_VSI_VM): New iterator.
        * config/riscv/riscv-vector.cc (enum vsew_field_enum): New enum.
        (rvv_classify_vsew_field): New function.
        (rvv_gen_policy): New function.
        (rvv_translate_attr_mode): New function.
        (TRANSLATE_VECTOR_MODE): New macro define.
        (classify_vtype_field): New function.
        (get_lmulx8): New function.
        (force_reg_for_over_uimm): New function.
        (gen_vlx2): New function.
        (emit_int64_to_vector_32bit): New function.
        (imm32_p): New function.
        (imm_p): New function.
        (gen_3): New function.
        (gen_4): New function.
        (gen_5): New function.
        (gen_6): New function.
        (gen_7): New function.
        (enum GEN_CLASS): New enum.
        (modify_operands): New function.
        (emit_op5_vmv_v_x): New function.
        (emit_op5): New function.
        * config/riscv/riscv-vector.h (riscv_vector_mode_p): New function. 
        (rvv_legitimate_poly_int_p): New function.
        (rvv_offset_temporaries): New function.
        (rvv_classify_vlmul_field): New function.
        (rvv_parse_vsew_field): New function.
        (rvv_parse_vlmul_field): New function.
        (rvv_parse_vta_field): New function.
        (rvv_parse_vma_field): New function.
        (rvv_regsize): New function.
        (rvv_get_mask_mode): New function.
        * config/riscv/riscv.md: Add RVV modes.
        * config/riscv/t-riscv: New object.
        * config/riscv/vector-iterators.md: New iterators and attributes.
        * config/riscv/vector.md (@vec_duplicate<mode>): New pattern.
        (@vle<mode>): New pattern.
        (@vse<mode>): New pattern.
        (@vlm<mode>): New pattern.
        (@vsm<mode>): New pattern.
        (@v<vxoptab><mode>_v_x): New pattern.
        (@vmv<mode>_v_x_internal): New pattern.
        (@vmv<mode>_v_x_32bit): New pattern.
        (@vfmv<mode>_v_f): New pattern.
        (@vmerge<mode>_vxm_internal): New pattern.
        * config/riscv/riscv-insert-vsetvl.cc: New file.
        
---
 gcc/config.gcc                                |    2 +-
 gcc/config/riscv/constraints.md               |    5 +
 gcc/config/riscv/predicates.md                |   31 +
 gcc/config/riscv/riscv-insert-vsetvl.cc       | 2312 +++++++++++++++++
 gcc/config/riscv/riscv-opts.h                 |   12 +
 gcc/config/riscv/riscv-passes.def             |    2 +
 gcc/config/riscv/riscv-protos.h               |   19 +
 .../riscv/riscv-vector-builtins-iterators.def |  236 ++
 gcc/config/riscv/riscv-vector.cc              |  368 +++
 gcc/config/riscv/riscv-vector.h               |   10 -
 gcc/config/riscv/riscv.md                     |   67 +-
 gcc/config/riscv/t-riscv                      |    4 +
 gcc/config/riscv/vector-iterators.md          |  129 +-
 gcc/config/riscv/vector.md                    |  235 +-
 14 files changed, 3417 insertions(+), 15 deletions(-)
 create mode 100644 gcc/config/riscv/riscv-insert-vsetvl.cc

diff --git a/gcc/config.gcc b/gcc/config.gcc
index 042a7a17737..1592e344531 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -518,7 +518,7 @@ pru-*-*)
 riscv*)
 	cpu_type=riscv
 	extra_headers="riscv_vector.h"
-	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o riscv-vector-builtins-functions.o riscv-vector-builtins.o"
+	extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-vector.o riscv-vector-builtins-functions.o riscv-vector-builtins.o riscv-insert-vsetvl.o"
 	d_target_objs="riscv-d.o"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-builtins.cc \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins-functions.cc"
diff --git a/gcc/config/riscv/constraints.md b/gcc/config/riscv/constraints.md
index 7fd61a04216..114878130bb 100644
--- a/gcc/config/riscv/constraints.md
+++ b/gcc/config/riscv/constraints.md
@@ -97,3 +97,8 @@
   (and (match_code "const_poly_int")
        (match_test "CONST_POLY_INT_COEFFS (op)[0] == UNITS_PER_V_REG.coeffs[0]
         && CONST_POLY_INT_COEFFS (op)[1] == UNITS_PER_V_REG.coeffs[1]")))
+
+(define_constraint "Ws5"
+  "Signed immediate 5-bit value"
+  (and (match_code "const_int")
+       (match_test "IN_RANGE (INTVAL (op), -16, 15)")))
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 6328cfff367..7a101676538 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -246,3 +246,34 @@
 (define_predicate "imm5_operand"
   (and (match_code "const_int")
        (match_test "INTVAL (op) < 5")))
+
+;; Vector Predicates.
+
+(define_special_predicate "p_reg_or_const_csr_operand"
+  (match_code "reg, subreg, const_int")
+{
+  if (CONST_INT_P (op))
+    return satisfies_constraint_K (op);
+  return GET_MODE (op) == Pmode;
+})
+
+(define_predicate "vector_reg_or_const0_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_test "op == const0_rtx && !VECTOR_MODE_P (GET_MODE (op))")))
+
+(define_predicate "vector_move_operand"
+  (ior (match_operand 0 "nonimmediate_operand")
+      (match_code "const_vector")))
+
+(define_predicate "reg_or_mem_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "memory_operand")))
+
+(define_predicate "reg_or_simm5_operand"
+  (ior (match_operand 0 "register_operand")
+       (and (match_operand 0 "const_int_operand")
+	    (match_test "!FLOAT_MODE_P (GET_MODE (op)) && IN_RANGE (INTVAL (op), -16, 15)"))))
+
+(define_predicate "reg_or_const_int_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_code "const_wide_int, const_int")))
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-insert-vsetvl.cc b/gcc/config/riscv/riscv-insert-vsetvl.cc
new file mode 100644
index 00000000000..939927c5775
--- /dev/null
+++ b/gcc/config/riscv/riscv-insert-vsetvl.cc
@@ -0,0 +1,2312 @@
+/* Insert-vsetvli pass for RISC-V 'V' Extension for GNU compiler.
+   Copyright (C) 2022-2022 Free Software Foundation, Inc.
+   Contributed by Juzhe Zhong (juzhe.zhong@rivai.ai), RiVAI Technologies Ltd.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#define IN_TARGET_CODE 1
+#define INCLUDE_ALGORITHM 1
+#define INCLUDE_FUNCTIONAL 1
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "rtl.h"
+#include "backend.h"
+#include "regs.h"
+#include "target.h"
+#include "memmodel.h"
+#include "emit-rtl.h"
+#include "df.h"
+#include "rtl-ssa.h"
+#include "predict.h"
+#include "insn-config.h"
+#include "insn-attr.h"
+#include "recog.h"
+#include "cfgrtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "gimple-iterator.h"
+#include "gimple-walk.h"
+#include "langhooks.h"
+#include "tree-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "cfgcleanup.h"
+
+#include <map>
+#include <vector>
+#include <queue>
+#include <set>
+#include <tuple>
+
+#include "riscv-protos.h"
+#include "riscv-vector-builtins-functions.h"
+#include "riscv-vector-builtins.h"
+
+using namespace riscv_vector;
+using namespace rtl_ssa;
+
+/*  This pass inserts vsetvli instructions for RVV instructions that
+    depend on vtype or vl.  Because the Clang+LLVM compiler already has
+    a mature, well-tested pass for inserting vsetvli instructions, this
+    algorithm follows the Clang+LLVM pass.
+
+    This pass consists of 3 phases:
+
+    Phase 1 collects how each basic block affects VL/VTYPE.
+
+    Phase 2 uses the information from phase 1 to do a data flow analysis to
+    propagate the VL/VTYPE changes through the function. This gives us the
+    VL/VTYPE at the start of each basic block.
+
+    Phase 3 inserts vsetvli instructions in each basic block. Information from
+    phase 2 is used to prevent inserting a vsetvli before the first vector
+    instruction in the block if possible.  */
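+
+/* As a small illustrative sketch (not compiler output), consider two
+   intrinsic calls that share the same AVL:
+
+     vint8mf8_t f (int8_t *base, size_t avl)
+     {
+       vint8mf8_t v = vle8_v_i8mf8 (base, avl);
+       return vadd_vv_i8mf8 (v, v, avl);
+     }
+
+   Both operations need e8/mf8 with the same AVL, so phase 3 should emit
+   a single configuration such as "vsetvli zero,a1,e8,mf8,ta,mu" before
+   the vle8.v and none before the vadd.vv.  */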
+
+enum state_enum
+{
+  STATE_UNINITIALIZED,
+  STATE_KNOWN,
+  STATE_UNKNOWN
+};
+
+enum replace_enum
+{
+  REPLACE_VL,
+  REPLACE_VTYPE
+};
+
+enum clobber_pat_enum
+{
+  MOV_CLOBBER_MEM_REG,
+  MOV_CLOBBER_REG_MEM,
+  MOV_CLOBBER_REG_REG,
+  MOV_CLOBBER_REG_CONST,
+  OTHERS
+};
+
+/* Helper functions. */
+
+static unsigned int
+get_policy_offset (rtx_insn *insn)
+{
+  unsigned int offset = 1;
+  if (GET_CODE (PATTERN (insn)) == PARALLEL)
+    {
+      if (get_attr_type (insn) == TYPE_VCMP)
+        offset = 2;
+    }
+  return offset;
+}
+
+static unsigned int
+get_vl_offset (rtx_insn *insn)
+{
+  unsigned int offset = 2;
+  if (GET_CODE (PATTERN (insn)) == PARALLEL)
+    {
+      if (get_attr_type (insn) == TYPE_VCMP)
+        offset = 3;
+    }
+  return offset;
+}
+
+static enum clobber_pat_enum
+recog_clobber_vl_vtype (rtx_insn *insn)
+{
+  /*
+   [(set (match_operand 0 "reg_or_mem_operand" "=vr,m,vr")
+         (match_operand 1 "reg_or_mem_operand" "m,vr,vr"))
+         (clobber (match_scratch:SI 2 "=&r,&r,X"))
+         (clobber (reg:SI VL_REGNUM))
+         (clobber (reg:SI VTYPE_REGNUM))]
+  */
+  rtx pat = PATTERN (insn);
+  if (GET_CODE (pat) != PARALLEL)
+    return OTHERS;
+
+  unsigned int len = XVECLEN (pat, 0);
+  if (len < 3)
+    return OTHERS;
+
+  if (!rtx_equal_p (
+          XVECEXP (pat, 0, len - 1),
+          gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, VTYPE_REGNUM))))
+    return OTHERS;
+
+  if (!rtx_equal_p (XVECEXP (pat, 0, len - 2),
+                    gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, VL_REGNUM))))
+    return OTHERS;
+
+  extract_insn_cached (insn);
+  rtx mov_pat = gen_rtx_SET (recog_data.operand[0], recog_data.operand[1]);
+  if (!rtx_equal_p (XVECEXP (pat, 0, 0), mov_pat))
+    return OTHERS;
+  
+  if (MEM_P (recog_data.operand[0]))
+    return MOV_CLOBBER_MEM_REG;
+ 
+  if (MEM_P (recog_data.operand[1]))
+    return MOV_CLOBBER_REG_MEM;
+  
+  if (REG_P (recog_data.operand[1]))
+    return MOV_CLOBBER_REG_REG;
+  
+  if (CONST_VECTOR_P (recog_data.operand[1]))
+    return MOV_CLOBBER_REG_CONST;
+      
+  return OTHERS;
+}
+
+static bool
+is_vector_config_instr (rtx_insn *insn)
+{
+  return insn && INSN_P (insn) && recog_memoized (insn) >= 0 &&
+         get_attr_type (insn) == TYPE_VSETVL;
+}
+
+/* Return true if this is 'vsetvli x0, x0, vtype' which preserves
+   VL and only sets VTYPE.  */
+static bool
+is_vl_preserving_config (rtx_insn *insn)
+{
+  if (is_vector_config_instr (insn))
+    {
+      extract_insn_cached (insn);
+      return recog_data.n_operands == 1;
+    }
+  return false;
+}
+
+static bool
+rvv_insn_p (rtx_insn *insn, rtx *src)
+{
+  *src = NULL_RTX;
+  if (!insn)
+    return false;
+  
+  if (!INSN_P (insn))
+    return false;
+  
+  if (recog_memoized (insn) < 0)
+    return false;
+  
+  if (!rvv_mode_p (rvv_translate_attr_mode (insn)))
+    return false;
+  
+  if (recog_clobber_vl_vtype (insn) != OTHERS)
+    {
+      if (reload_completed)
+        {
+          *src = SET_SRC (XVECEXP (PATTERN (insn), 0, 0));
+          return true;
+        }
+      else
+        return false;
+    }
+  
+  if (GET_CODE (PATTERN (insn)) == PARALLEL)
+    *src = SET_SRC (XVECEXP (PATTERN (insn), 0, 0));
+
+  if (GET_CODE (PATTERN (insn)) == SET)
+    *src = SET_SRC (PATTERN (insn));
+
+  if (!*src)
+    return false;
+
+  if (GET_CODE (*src) != UNSPEC)
+    return false;
+
+  if (XINT (*src, 1) != UNSPEC_RVV)
+    return false;
+
+  return true;
+}
+
+static bool
+use_vl_p (rtx_insn *insn)
+{
+  rtx src = NULL_RTX;
+  if (!rvv_insn_p (insn, &src))
+    return false;
+  
+  if (recog_clobber_vl_vtype (insn) != OTHERS)
+    return true;
+
+  if (rtx_equal_p (XVECEXP (src, 0, XVECLEN (src, 0) - 1),
+                   gen_rtx_REG (SImode, VL_REGNUM)))
+    return true;
+
+  if (XVECLEN (src, 0) > 1 &&
+      rtx_equal_p (XVECEXP (src, 0, XVECLEN (src, 0) - 2),
+                   gen_rtx_REG (SImode, VL_REGNUM)))
+    return true;
+
+  return false;
+}
+
+static bool
+use_vtype_p (rtx_insn *insn)
+{ 
+  rtx src = NULL_RTX;
+  if (!rvv_insn_p (insn, &src))
+    return false;
+  
+  if (recog_clobber_vl_vtype (insn) != OTHERS)
+    return true;
+    
+  if (rtx_equal_p (XVECEXP (src, 0, XVECLEN (src, 0) - 1),
+                   gen_rtx_REG (SImode, VTYPE_REGNUM)))
+    return true;
+
+  return false;
+}
+
+static bool
+use_vlmax_p (rtx_insn *insn)
+{
+  rtx src = NULL_RTX;
+  unsigned int length = 0;
+  
+  if (recog_clobber_vl_vtype (insn) != OTHERS)
+    return true;
+    
+  if (rvv_insn_p (insn, &src))
+    length = XVECLEN (src, 0);
+  
+  if (length < 2)
+    return false;
+
+  if (rtx_equal_p (XVECEXP (src, 0, length - 1),
+                   gen_rtx_REG (SImode, VL_REGNUM)))
+    return rtx_equal_p (XVECEXP (src, 0, length - 2),
+                        gen_rtx_REG (Pmode, X0_REGNUM));
+
+  if (length < 3)
+    return false;
+
+  return rtx_equal_p (XVECEXP (src, 0, length - 3),
+                      gen_rtx_REG (Pmode, X0_REGNUM));
+}
+
+static bool
+need_vsetvli_p (rtx_insn *insn)
+{
+  rtx src = NULL_RTX;
+  if (!rvv_insn_p (insn, &src))
+    return false;
+  return true;
+}
+
+static void
+replace_op (rtx_insn *insn, rtx x, unsigned int replace)
+{
+  extract_insn_cached (insn);
+  if (replace == REPLACE_VTYPE)
+    validate_change (insn, recog_data.operand_loc[recog_data.n_operands - 1], x, false);
+
+  if (replace == REPLACE_VL && !use_vlmax_p (insn))
+    {
+      unsigned int offset = get_vl_offset (insn);
+      validate_change (insn,
+                       recog_data.operand_loc[recog_data.n_operands - offset],
+                       x, false);
+    }
+}
+
+static bool
+update_vl_vtype_p (rtx_insn *insn)
+{
+  if (insn && NONDEBUG_INSN_P (insn))
+    {
+      if (recog_memoized (insn) >= 0 &&
+          (get_attr_type (insn) == TYPE_VLEFF))
+        {
+          extract_insn_cached (insn);
+          if (INTVAL (recog_data.operand[recog_data.n_operands - 1]) ==
+              DO_NOT_UPDATE_VL_VTYPE)
+            return false;
+          return true;
+        }
+      if (CALL_P (insn))
+        return true;
+      if (PATTERN (insn) && (GET_CODE (PATTERN (insn)) == ASM_INPUT ||
+                             GET_CODE (PATTERN (insn)) == ASM_OPERANDS ||
+                             asm_noperands (PATTERN (insn)) >= 0))
+        return true;
+    }
+  return false;
+}
+
+static rtx
+get_avl_source (rtx avl, rtx_insn *rtl)
+{
+  if (!rtl || !avl)
+    return NULL_RTX;
+
+  if (optimize < 2)
+    return NULL_RTX;
+
+  insn_info *next;
+  rtx avl_source = NULL_RTX;
+
+  if (!REG_P (avl))
+    return NULL_RTX;
+
+  for (insn_info *insn = crtl->ssa->first_insn (); insn; insn = next)
+    {
+      next = insn->next_any_insn ();
+      if (insn->rtl () == rtl)
+        {
+          resource_info resource{GET_MODE (avl), REGNO (avl)};
+          def_lookup dl = crtl->ssa->find_def (resource, insn);
+          def_info *def = dl.prev_def (insn);
+
+          if (!def)
+            return NULL_RTX;
+
+          if (!is_a<set_info *> (def))
+            return NULL_RTX;
+
+          insn_info *def_insn = def->insn ();
+
+          if (!def_insn)
+            return NULL_RTX;
+          rtx_insn *def_rtl = def_insn->rtl ();
+
+          if (!def_rtl)
+            return NULL_RTX;
+
+          if (INSN_P (def_rtl) && single_set (def_rtl))
+            {
+              avl_source = SET_SRC (single_set (def_rtl));
+              break;
+            }
+        }
+    }
+
+  return avl_source;
+}
+
+static machine_mode
+vsew_to_int_mode (unsigned vsew)
+{
+  return vsew == 0 ? QImode : vsew == 1 ? HImode : vsew == 2 ? SImode : DImode;
+}
+
+class vinfo
+{
+private:
+  state_enum state;
+  // Fields from VTYPE.
+  uint8_t vma : 1;
+  uint8_t vta : 1;
+  uint8_t vsew : 3;
+  uint8_t vlmul : 3;
+  uint8_t all_maskop_p : 1;
+  uint8_t store_p : 1;
+  uint8_t sew_lmul_ratio_only_p : 1;
+  uint8_t scalar_move_p : 1;
+  rtx avl;
+  rtx avl_source;
+
+public:
+  vinfo ()
+      : state (STATE_UNINITIALIZED), vma (false), vta (false), vsew (0),
+        vlmul (0), all_maskop_p (false), store_p (false), sew_lmul_ratio_only_p (false),
+        scalar_move_p (false), avl (NULL_RTX), avl_source (NULL_RTX)
+  {
+  }
+
+  ~vinfo () {}
+  
+  static vinfo
+  get_unknown ()
+  {
+    vinfo info;
+    info.set_unknown ();
+    return info;
+  }
+
+  bool
+  valid_p () const
+  {
+    return state != STATE_UNINITIALIZED;
+  }
+  void
+  set_unknown ()
+  {
+    state = STATE_UNKNOWN;
+  }
+  bool
+  unknown_p () const
+  {
+    return state == STATE_UNKNOWN;
+  }
+
+  bool
+  known_p () const
+  {
+    return state == STATE_KNOWN;
+  }
+
+  void
+  set_avl (rtx op)
+  {
+    avl = op;
+    state = STATE_KNOWN;
+  }
+
+  void
+  set_avl_source (rtx op)
+  {
+    avl_source = op;
+  }
+
+  bool
+  avl_const_p () const
+  {
+    return get_avl () && CONST_SCALAR_INT_P (get_avl ());
+  }
+  
+  bool
+  avl_reg_p () const
+  {
+    return get_avl () && REG_P (get_avl ());
+  }
+
+  rtx
+  get_avl () const
+  {
+    gcc_assert (known_p ());
+    return avl;
+  }
+
+  bool
+  has_zero_avl () const
+  {
+    if (!known_p ())
+      return false;
+    if (get_avl () == NULL_RTX)
+      return false;
+    if (avl_const_p ())
+      return INTVAL (get_avl ()) == 0; 
+    return false;
+  }
+
+  bool
+  has_nonzero_avl () const
+  {
+    if (!known_p ())
+      return false;
+    if (get_avl () == NULL_RTX)
+      return false;
+    if (avl_const_p ())
+      return INTVAL (get_avl ()) > 0; 
+    if (avl_reg_p ())
+      return rtx_equal_p (get_avl (), gen_rtx_REG (Pmode, X0_REGNUM));
+    return false;
+  }
+
+  rtx
+  get_avl_source () const
+  {
+    gcc_assert (known_p ());
+    return avl_source;
+  }
+
+  unsigned int
+  get_vsew () const
+  {
+    return vsew;
+  }
+  
+  enum vlmul_field_enum
+  get_vlmul () const
+  {
+    return (enum vlmul_field_enum) vlmul;
+  }
+  
+  unsigned int
+  get_vta () const
+  {
+    return vta;
+  }
+  
+  unsigned int
+  get_vma () const
+  {
+    return vma;
+  }
+
+  uint8_t
+  get_store_p () const
+  {
+    return store_p;
+  }
+  
+  bool
+  compare_vl (const vinfo &info) const
+  {
+    /* Optimize the code as follows, assuming RVV has a fixed
+       vector length of 128 bits:
+       vsetvli a5, 16, e8, m1......
+       .........
+       vsetvli a5, zero, e8, m1.....(not needed)
+    */
+    if (!get_avl () || !info.get_avl ())
+      return false;
+
+    if (REG_P (get_avl ()) && REGNO (get_avl ()) == X0_REGNUM)
+      {
+        unsigned int vsew = info.get_vsew ();
+        machine_mode inner = vsew_to_int_mode (vsew);
+        machine_mode mode = riscv_vector::vector_builtin_mode (
+            as_a<scalar_mode> (inner), info.get_vlmul ());
+        if (CONST_SCALAR_INT_P (info.get_avl ()))
+          {
+            if (GET_MODE_NUNITS (mode).is_constant () &&
+                INTVAL (info.get_avl ()) ==
+                    GET_MODE_NUNITS (mode).to_constant ())
+              return true;
+          }
+
+        if (REG_P (info.get_avl ()))
+          {
+            if (info.get_avl_source ())
+              {
+                if (CONST_SCALAR_INT_P (info.get_avl_source ()) &&
+                    GET_MODE_NUNITS (mode).is_constant () &&
+                    INTVAL (info.get_avl_source ()) ==
+                        GET_MODE_NUNITS (mode).to_constant ())
+                  return true;
+                if (CONST_POLY_INT_P (info.get_avl_source ()) &&
+                    !GET_MODE_NUNITS (mode).is_constant () &&
+                    known_eq (rtx_to_poly_int64 (info.get_avl_source ()),
+                              GET_MODE_NUNITS (mode)))
+                  return true;
+              }
+          }
+      }
+
+    return false;
+  }
+
+  bool
+  avl_equal_p (const vinfo &other) const
+  {
+    gcc_assert (valid_p () && other.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!unknown_p () && !other.unknown_p () &&
+                "Can't compare AVL in unknown state.");
+
+    if (compare_vl (other))
+      return true;
+      
+    if (other.compare_vl (*this))
+      return true;
+    
+    if (rtx_equal_p (get_avl (), other.get_avl ()))
+      return true;
+
+    if (!get_avl_source () && !other.get_avl_source ())
+      return false;
+
+    if (get_avl_source () && rtx_equal_p (get_avl_source (), other.get_avl ()))
+      return true;
+
+    if (other.get_avl_source () &&
+        rtx_equal_p (other.get_avl_source (), get_avl ()))
+      return true;
+
+    return rtx_equal_p (get_avl_source (), other.get_avl_source ());
+  }
+  
+  void
+  set_vma (unsigned int new_vma)
+  {
+    gcc_assert (valid_p () && !unknown_p () &&
+                "Can't set VTYPE for uninitialized or unknown.");
+    vma = new_vma;
+  }
+  
+  void
+  set_vta (unsigned int new_vta)
+  {
+    gcc_assert (valid_p () && !unknown_p () &&
+                "Can't set VTYPE for uninitialized or unknown.");
+    vta = new_vta;
+  }
+  
+  void
+  set_vtype (unsigned int vtype)
+  {
+    gcc_assert (valid_p () && !unknown_p () &&
+                "Can't set VTYPE for uninitialized or unknown.");
+    vma = rvv_parse_vma_field (vtype);
+    vta = rvv_parse_vta_field (vtype);
+    vsew = rvv_parse_vsew_field (vtype);
+    vlmul = rvv_parse_vlmul_field (vtype);
+  }
+
+  void
+  set_vtype (unsigned vl, unsigned vs, bool vt, bool vm, bool m_p,
+             bool st_p, bool is_scalar_move_op)
+  {
+    gcc_assert (valid_p () && !unknown_p () &&
+                "Can't set VTYPE for uninitialized or unknown.");
+    vma = vm;
+    vta = vt;
+    vsew = vs;
+    vlmul = vl;
+    all_maskop_p = m_p;
+    store_p = st_p;
+    scalar_move_p = is_scalar_move_op;
+  }
+
+  // Encode VTYPE into the binary format used by the VSETVLI instruction
+  // which is used by our MC layer representation.
+  //
+  // Bits | Name       | Description
+  // -----+------------+------------------------------------------------
+  // 7    | vma        | Vector mask agnostic
+  // 6    | vta        | Vector tail agnostic
+  // 5:3  | vsew[2:0]  | Standard element width (SEW) setting
+  // 2:0  | vlmul[2:0] | Vector register group multiplier (LMUL) setting
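+  //
+  // As a worked sketch of this encoding: e32 (vsew=2) with m1 (vlmul=0),
+  // tail agnostic and mask undisturbed encode as (2 << 3) | 0 = 0x10,
+  // then | 0x40 (vta) = 0x50, i.e. the vtype immediate for "e32,m1,ta,mu".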
+  unsigned
+  encode_vtype () const
+  {
+    gcc_assert (valid_p () && !unknown_p () && !sew_lmul_ratio_only_p &&
+                "Can't set VTYPE for uninitialized or unknown.");
+    gcc_assert (vsew >= 0 && vsew <= 7 && "Invalid SEW.");
+    unsigned vtype = (vsew << 3) | (vlmul & 0x7);
+    if (vta)
+      vtype |= 0x40;
+    if (vma)
+      vtype |= 0x80;
+
+    return vtype;
+  }
+
+  bool
+  get_sew_lmul_ratio_only_p () const
+  {
+    return sew_lmul_ratio_only_p;
+  }
+
+  bool
+  sew_equal_p (const vinfo &other) const
+  {
+    gcc_assert (valid_p () && other.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!unknown_p () && !other.unknown_p () &&
+                "Can't compare VTYPE in unknown state.");
+    gcc_assert (!sew_lmul_ratio_only_p && !other.sew_lmul_ratio_only_p &&
+                "Can't compare when only LMUL/SEW ratio is valid.");
+    return vsew == other.vsew;
+  }
+
+  bool
+  vtype_equal_p (const vinfo &other) const
+  {
+    gcc_assert (valid_p () && other.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!unknown_p () && !other.unknown_p () &&
+                "Can't compare VTYPE in unknown state.");
+    gcc_assert (!sew_lmul_ratio_only_p && !other.sew_lmul_ratio_only_p &&
+                "Can't compare when only LMUL/SEW ratio is valid.");
+    return std::tie (vma, vta, vsew, vlmul) ==
+           std::tie (other.vma, other.vta, other.vsew, other.vlmul);
+  }
+
+  bool
+  policy_equal_p (const vinfo &other) const
+  {
+    gcc_assert (valid_p () && other.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!unknown_p () && !other.unknown_p () &&
+                "Can't compare VTYPE in unknown state.");
+    
+    return vta == other.vta && vma == other.vma;
+  }
+
+  unsigned
+  calc_sew_lmul_ratio (unsigned int vsew_arg, unsigned int vlmul_arg) const
+  {
+    gcc_assert (valid_p () && !unknown_p () &&
+                "Can't use VTYPE for uninitialized or unknown.");
+
+    unsigned lmul;
+    unsigned sew;
+    bool fractional;
+
+    switch (vsew_arg)
+      {
+      default:
+        gcc_unreachable ();
+      case 0:
+        sew = 8;
+        break;
+      case 1:
+        sew = 16;
+        break;
+      case 2:
+        sew = 32;
+        break;
+      case 3:
+        sew = 64;
+        break;
+      case 4:
+        sew = 128;
+        break;
+      case 5:
+        sew = 256;
+        break;
+      case 6:
+        sew = 512;
+        break;
+      case 7:
+        sew = 1024;
+        break;
+      }
+
+    switch (vlmul_arg)
+      {
+      default:
+        gcc_unreachable ();
+      case 0:
+        lmul = 1;
+        fractional = false;
+        break;
+      case 1:
+        lmul = 2;
+        fractional = false;
+        break;
+      case 2:
+        lmul = 4;
+        fractional = false;
+        break;
+      case 3:
+        lmul = 8;
+        fractional = false;
+        break;
+      case 5:
+        lmul = 8;
+        fractional = true;
+        break;
+      case 6:
+        lmul = 4;
+        fractional = true;
+        break;
+      case 7:
+        lmul = 2;
+        fractional = true;
+        break;
+      }
+
+    gcc_assert (sew >= 8 && "Unexpected SEW value.");
+    unsigned int sew_mul_ratio = fractional ? sew * lmul : sew / lmul;
+
+    return sew_mul_ratio;
+  }
+
+  unsigned
+  calc_sew_lmul_ratio () const
+  {
+    return calc_sew_lmul_ratio (vsew, vlmul);
+  }
+
+  // Check if the VTYPEs for these two VSETVLI infos produce the same VLMAX.
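+  // For example, e32 with LMUL=2 and e64 with LMUL=4 both give a
+  // SEW/LMUL ratio of 16, and therefore the same VLMAX.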
+  bool
+  vlmax_equal_p (const vinfo &other) const
+  {
+    gcc_assert (valid_p () && other.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!unknown_p () && !other.unknown_p () &&
+                "Can't compare AVL in unknown state.");
+    return calc_sew_lmul_ratio () == other.calc_sew_lmul_ratio ();
+  }
+
+  bool
+  compatible_vtype_p (const vinfo &info) const
+  {
+    // Simple case, see if full VTYPE matches.
+    if (vtype_equal_p (info))
+      return true;
+
+    // If this is a mask reg operation, it only cares about VLMAX.
+    // FIXME: Mask reg operations are probably ok if "this" VLMAX is larger
+    // than "InstrInfo".
+    // FIXME: The policy bits can probably be ignored for mask reg operations.
+    if (info.all_maskop_p && vlmax_equal_p (info) && vta == info.vta &&
+        vma == info.vma)
+      return true;
+
+    return false;
+  }
+
+  // Determine whether the vector instructions requirements represented by
+  // InstrInfo are compatible with the previous vsetvli instruction represented
+  // by this.
+  bool
+  compatible_p (const vinfo &require) const
+  {
+    gcc_assert (valid_p () && require.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!require.sew_lmul_ratio_only_p &&
+                "Expected a valid VTYPE for instruction.");
+
+    // Nothing is compatible with Unknown.
+    if (unknown_p () || require.unknown_p ())
+      return false;
+
+    // If only our VLMAX ratio is valid, then this isn't compatible.
+    if (sew_lmul_ratio_only_p)
+      return false;
+
+    // If the instruction doesn't need an AVLReg and the SEW matches, consider
+    // it compatible.
+    if (require.known_p () && require.avl == NULL_RTX
+      && vsew == require.vsew)
+      return true;
+    
+    // For vmv.s.x and vfmv.s.f, there are only two behaviors, VL = 0 and
+    // VL > 0.  So they are compatible when we can make sure that both VLs
+    // are in the same situation.
+    if (require.scalar_move_p && require.get_avl () &&
+        CONST_SCALAR_INT_P (require.get_avl ()) &&
+        ((has_nonzero_avl () && require.has_nonzero_avl ()) ||
+         (has_zero_avl () && require.has_zero_avl ())) &&
+        sew_equal_p (require) && policy_equal_p (require))
+      return true;
+    
+    // The AVL must match.
+    if (!avl_equal_p (require))
+      return false;
+    
+    if (compatible_vtype_p (require))
+      return true;
+      
+    // Store instructions don't use the policy fields.
+    // TODO: Move into hasCompatibleVTYPE?
+    if (require.store_p && vlmul == require.vlmul && vsew == require.vsew)
+      return true;
+      
+    // Anything else is not compatible.
+    return false;
+  }
+
+  bool
+  load_store_compatible_p (unsigned vsew_arg, const vinfo &info) const
+  {
+    gcc_assert (valid_p () && info.valid_p () &&
+                "Can't compare invalid VSETVLI Infos.");
+    gcc_assert (!info.sew_lmul_ratio_only_p &&
+                "Expected a valid VTYPE for instruction.");
+    gcc_assert (vsew_arg == info.vsew && "Mismatched EEW/SEW for store.");
+    
+    if (unknown_p () || get_sew_lmul_ratio_only_p ())
+      return false;
+
+    if (!avl_equal_p (info))
+      return false;
+
+    // Stores can ignore the tail and mask policies.
+    if (!info.store_p && (vta != info.vta || vma != info.vma))
+      return false;
+
+    return calc_sew_lmul_ratio () == calc_sew_lmul_ratio (vsew_arg, info.vlmul);
+  }
+
+  bool
+  operator== (const vinfo &other) const
+  {
+    // Uninitialized is only equal to another Uninitialized.
+    if (!valid_p ())
+      return !other.valid_p ();
+
+    if (!other.valid_p ())
+      return !valid_p ();
+
+    // Unknown is only equal to another Unknown.
+    if (unknown_p ())
+      return other.unknown_p ();
+
+    if (other.unknown_p ())
+      return unknown_p ();
+
+    if (!avl_equal_p (other))
+      return false;
+
+    // If only the VLMAX is valid, check that it is the same.
+    if (sew_lmul_ratio_only_p && other.sew_lmul_ratio_only_p)
+      return vlmax_equal_p (other);
+
+    // If the full VTYPE is valid, check that it is the same.
+    if (!sew_lmul_ratio_only_p && !other.sew_lmul_ratio_only_p)
+      return vtype_equal_p (other);
+
+    // If the sew_lmul_ratio_only bits are different, then they aren't equal.
+    return false;
+  }
+
+  bool
+  operator!= (const vinfo &Other) const
+  {
+    return !(*this == Other);
+  }
+
+  vinfo &
+  operator= (const vinfo &other)
+  {
+    state = other.state;
+    vma = other.vma;
+    vta = other.vta;
+    vsew = other.vsew;
+    vlmul = other.vlmul;
+    all_maskop_p = other.all_maskop_p;
+    store_p = other.store_p;
+    scalar_move_p = other.scalar_move_p;
+    sew_lmul_ratio_only_p = other.sew_lmul_ratio_only_p;
+    avl = other.avl;
+    avl_source = other.avl_source;
+    return *this;
+  }
+
+  // Calculate the vinfo visible to a block assuming this and other are
+  // both predecessors.
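+  //
+  // A merge sketch: predecessors exiting with {avl=a0, e32, m1} and
+  // {avl=a0, e64, m2} agree on the AVL and on the SEW/LMUL ratio (32),
+  // so the intersection keeps the state but marks it as valid for the
+  // SEW/LMUL ratio only.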
+  vinfo
+  intersect (const vinfo &other) const
+  {
+    // If the new value isn't valid, ignore it.
+    if (!other.valid_p ())
+      return *this;
+
+    // If this value isn't valid, this must be the first predecessor, use it.
+    if (!valid_p ())
+      return other;
+
+    // If either is unknown, the result is unknown.
+    if (unknown_p () || other.unknown_p ())
+      return vinfo::get_unknown ();
+
+    // If we have an exact match, return this.
+    if (*this == other)
+      return *this;
+
+    // Not an exact match, but maybe the AVL and VLMAX are the same. If so,
+    // return an SEW/LMUL ratio only value.
+    if (avl_equal_p (other) && vlmax_equal_p (other))
+      {
+        vinfo merge_info = *this;
+        merge_info.sew_lmul_ratio_only_p = true;
+        return merge_info;
+      }
+
+    // Otherwise the result is unknown.
+    return vinfo::get_unknown ();
+  }
+  
+  // Print debug info into the RTL dump file.
+  void
+  print () const
+  {
+    fprintf (dump_file, "{\n");
+    if (known_p ())
+      fprintf (dump_file, "  Known\n");
+    else if (unknown_p ())
+      fprintf (dump_file, "  Unknown\n");
+    else
+      fprintf (dump_file, "  Uninitialized\n");
+    
+    if (known_p () && get_avl ())
+      {
+        fprintf (dump_file, "  Avl=");
+        print_rtl_single (dump_file, get_avl ());
+        if (get_avl_source ())
+          {
+            fprintf (dump_file, "  Avl Source=");
+            print_rtl_single (dump_file, get_avl_source ());
+          }
+        else
+          fprintf (dump_file, "  Avl Source=(nil)\n");
+      }
+    else
+      fprintf (dump_file, "  Avl=(nil)\n  Avl Source=(nil)\n");
+    fprintf (dump_file, "  Vsew=%d\n", (unsigned int)vsew);
+    fprintf (dump_file, "  Vlmul=%d\n", (unsigned int)vlmul);
+    fprintf (dump_file, "  TailAgnostic=%d\n", (unsigned int)vta);
+    fprintf (dump_file, "  MaskAgnostic=%d\n", (unsigned int)vma);
+    fprintf (dump_file, "  MaskOp=%d\n", (unsigned int)all_maskop_p);
+    fprintf (dump_file, "  Store_p=%d\n", (unsigned int)store_p);
+    fprintf (dump_file, "  Scalar_move_p=%d\n", (unsigned int)scalar_move_p);
+    fprintf (dump_file, "  Sew_lmul_ratio_only_p=%d\n", (unsigned int)sew_lmul_ratio_only_p);
+    fprintf (dump_file, "}\n");
+  }
+};
+
+struct bb_vinfo
+{
+  // The vinfo that represents the net changes to the VL/VTYPE registers
+  // made by this block. Calculated in Phase 1.
+  vinfo change;
+
+  // The vinfo that represents the VL/VTYPE settings on exit from this
+  // block. Calculated in Phase 2.
+  vinfo exit;
+
+  // The vinfo that represents the VL/VTYPE settings from all predecessor
+  // blocks. Calculated in Phase 2, and used by Phase 3.
+  vinfo pred;
+
+  // Keeps track of whether the block is already in the queue.
+  bool inqueue = false;
+
+  bb_vinfo () {}
+};
+
+static std::map<unsigned int, bb_vinfo> bb_vinfo_map;
+static std::deque<basic_block> bb_queue;
+
+static rtx_insn *
+fetch_def_insn (rtx_insn *rtl, const vinfo info)
+{
+  /* We need the RTL-SSA def_info chain for this optimization, which
+     requires an optimization level greater than or equal to 2.  */
+  if (optimize < 2)
+    return NULL;
+  
+  // We didn't find a compatible value. If our AVL is a virtual register,
+  // it might be defined by a VSET(I)VLI. If it has the same VTYPE we need
+  // and the last VL/VTYPE we observed is the same, we don't need a
+  // VSETVLI here.
+  if (!info.known_p ())
+    return NULL;
+  if (!info.get_avl ())
+    return NULL;
+
+  rtx avl = info.get_avl ();
+
+  if (!REG_P (avl))
+    return NULL;
+
+  insn_info *next;
+  for (insn_info *insn = crtl->ssa->first_insn (); insn; insn = next)
+    {
+      next = insn->next_any_insn ();
+      if (insn->rtl () == rtl)
+        {
+          resource_info resource{GET_MODE (avl), REGNO (avl)};
+          def_lookup dl = crtl->ssa->find_def (resource, insn);
+          def_info *def = dl.prev_def (insn);
+
+          if (!def)
+            return NULL;
+
+          if (!is_a<set_info *> (def))
+            return NULL;
+
+          insn_info *def_insn = def->insn ();
+          rtx_insn *def_rtl = def_insn->rtl ();
+
+          if (!def_rtl)
+            return NULL;
+          if (!INSN_P (def_rtl))
+            return NULL;
+            
+          return def_rtl;
+        }
+    }
+
+  return NULL;
+}
+
+static void
+emit_vsetvl_insn (rtx op0, rtx op1, rtx op2, rtx_insn *insn)
+{
+  if (dump_file)
+    {
+      fprintf (dump_file, "insert vsetvli for insn %d\n\n", INSN_UID (insn));
+      print_rtl_single (dump_file, insn);
+    }
+
+  if (rtx_equal_p (op0, gen_rtx_REG (Pmode, X0_REGNUM)) &&
+      rtx_equal_p (op1, gen_rtx_REG (Pmode, X0_REGNUM)))
+    emit_insn_before (gen_vsetvl_zero_zero (op2), insn);
+  else if (rtx_equal_p (op0, gen_rtx_REG (Pmode, X0_REGNUM)))
+    emit_insn_before (gen_vsetvl_zero (Pmode, op1, op2), insn);
+  else
+    emit_insn_before (gen_vsetvl (Pmode, op0, op1, op2), insn);
+}
+
+static vinfo 
+compute_info_for_instr (rtx_insn *, vinfo);
+
+// Return a vinfo representing the changes made by this VSETVLI or
+// VSETIVLI instruction.
+static vinfo
+get_info_for_vsetvli (rtx_insn *insn, vinfo curr_info)
+{
+  vinfo new_info;
+  extract_insn_cached (insn);
+
+  if (recog_data.n_operands == 1)
+    {
+      gcc_assert (CONST_INT_P (recog_data.operand[0]) &&
+                  "Invalid vtype in vsetvli instruction.");
+      if (curr_info.valid_p () && !curr_info.unknown_p ())
+        {
+          new_info.set_avl (curr_info.get_avl ());
+          new_info.set_avl_source (curr_info.get_avl_source ());
+          new_info.set_vtype (INTVAL (recog_data.operand[0]));
+          /* if this X0, X0 vsetvli is redundant,
+             remove it. */
+          if (curr_info.compatible_vtype_p (new_info))
+            remove_insn (insn);
+        }
+      else
+        {
+          /* vsetvli X0, X0 means that the following instruction
+             uses the same vl as before.  */
+          basic_block bb = BLOCK_FOR_INSN (insn);
+          rtx_insn *next_insn;
+          bool find_vl_p = false;
+          for (next_insn = NEXT_INSN (insn); next_insn != NEXT_INSN (BB_END (bb));
+               next_insn = NEXT_INSN (next_insn))
+            {
+              if (use_vtype_p (next_insn))
+                {
+                  vinfo next_info = compute_info_for_instr (next_insn, curr_info);
+                  new_info.set_avl (next_info.get_avl ());
+                  new_info.set_avl_source (next_info.get_avl_source ());
+                  extract_insn_cached (insn);
+                  new_info.set_vtype (INTVAL (recog_data.operand[0]));
+                  
+                  if (recog_clobber_vl_vtype (next_insn) != MOV_CLOBBER_REG_REG &&
+                      recog_clobber_vl_vtype (next_insn) != OTHERS)
+                    new_info = vinfo::get_unknown ();
+                    
+                  find_vl_p = true;
+                  break;
+                }
+            }
+          gcc_assert (find_vl_p);
+        }
+      return new_info;
+    }
+  if (recog_data.n_operands == 2)
+    {
+      gcc_assert (CONST_INT_P (recog_data.operand[1]) &&
+                  "Invalid vtype in vsetvli instruction.");
+      new_info.set_avl (recog_data.operand[0]);
+      new_info.set_avl_source (get_avl_source (recog_data.operand[0], insn));
+      new_info.set_vtype (INTVAL (recog_data.operand[1]));
+      return new_info;
+    }
+  
+  gcc_assert (recog_data.n_operands == 3);
+  rtx vl = recog_data.operand[1];
+  rtx vtype = recog_data.operand[2];
+  gcc_assert (CONST_INT_P (vtype) && "Invalid vtype in vsetvli instruction.");
+  new_info.set_avl (vl);
+  new_info.set_avl_source (get_avl_source (vl, insn));
+  new_info.set_vtype (INTVAL (vtype));
+  return new_info;
+}
+
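+/* Return the vma/vta policy bits for INSN packed as (vma << 1) | vta.
+   When the intrinsic leaves a policy field unspecified (N/A), fall back
+   to the policy currently in effect according to CURR_INFO.  */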
+static unsigned int
+analyze_vma_vta (rtx_insn *insn, vinfo curr_info)
+{
+  if (!use_vl_p (insn))
+    return 1;
+  
+  if (recog_clobber_vl_vtype (insn) != OTHERS)
+    return 1;
+    
+  if (use_vlmax_p (insn))
+    return 1;
+  unsigned int offset = get_policy_offset (insn);
+  extract_insn_cached (insn);
+  vector_policy vma =
+      riscv_vector::get_vma (INTVAL (recog_data.operand[recog_data.n_operands - offset]));
+  vector_policy vta =
+      riscv_vector::get_vta (INTVAL (recog_data.operand[recog_data.n_operands - offset]));
+  unsigned int vma_p = 0;
+  unsigned int vta_p = 0;
+  if (vma == vector_policy::agnostic)
+    vma_p = 1;
+  else if (vma == vector_policy::undisturbed)
+    vma_p = 0;
+  else
+    {
+      /* For an N/A vma, keep the last vma if it is valid.  */
+      if (curr_info.valid_p () && !curr_info.unknown_p ())
+        vma_p = curr_info.get_vma ();
+      else
+        vma_p = 0;
+    }
+  
+  if (vta == vector_policy::agnostic)
+    vta_p = 1;
+  else if (vta == vector_policy::undisturbed)
+    vta_p = 0;
+  else
+    {
+      /* For an N/A vta, keep the last vta if it is valid.  */
+      if (curr_info.valid_p () && !curr_info.unknown_p ())
+        vta_p = curr_info.get_vta ();
+      else
+        vta_p = 1;
+    }
+  return (vma_p << 1) | vta_p;
+}
+
+static bool
+scalar_move_insn_p (rtx_insn *insn)
+{
+  return insn && INSN_P (insn) && recog_memoized (insn) >= 0 &&
+         (get_attr_type (insn) == TYPE_VMV_S_X ||
+          get_attr_type (insn) == TYPE_VFMV_S_F);
+}
+
+static bool
+store_insn_p (rtx_insn *insn)
+{
+  return insn && INSN_P (insn) && recog_memoized (insn) >= 0 &&
+         (get_attr_type (insn) == TYPE_VSE ||
+          get_attr_type (insn) == TYPE_VSSE);
+}
+
+static bool
+can_skip_load_store_insn_p (rtx_insn *insn)
+{
+  return insn && INSN_P (insn) && recog_memoized (insn) >= 0 &&
+         (get_attr_type (insn) == TYPE_VSE ||
+          get_attr_type (insn) == TYPE_VSSE ||
+          get_attr_type (insn) == TYPE_VLE ||
+          get_attr_type (insn) == TYPE_VLSE);
+}
+
+static vinfo
+compute_info_for_instr (rtx_insn *insn, vinfo curr_info)
+{
+  vinfo info;
+
+  extract_insn_cached (insn);
+  
+  if (use_vl_p (insn))
+    {
+      if (recog_clobber_vl_vtype (insn) != OTHERS)
+        info.set_avl (gen_rtx_REG (Pmode, X0_REGNUM));
+      else if (use_vlmax_p (insn))
+        info.set_avl (gen_rtx_REG (Pmode, X0_REGNUM));
+      else
+        {
+          unsigned int offset = get_vl_offset (insn);
+          info.set_avl_source (get_avl_source (
+              recog_data.operand[recog_data.n_operands - offset], insn));
+          info.set_avl (recog_data.operand[recog_data.n_operands - offset]);
+        }
+    }
+  else
+    info.set_avl (NULL_RTX);
+
+  machine_mode mode = rvv_translate_attr_mode (insn);
+  bool st_p = store_insn_p (insn);
+  bool scalar_move_p = scalar_move_insn_p (insn);
+
+  unsigned int vma_vta = analyze_vma_vta (insn, curr_info);
+  unsigned int vta = vma_vta & 0x1;
+  unsigned int vma = (vma_vta >> 1) & 0x1;
+  info.set_vtype (rvv_classify_vlmul_field (mode),
+                  rvv_classify_vsew_field (mode),
+                  /*TailAgnostic*/ vta, /*MaskAgnostic*/ vma,
+                  rvv_mask_mode_p (mode), st_p, scalar_move_p);
+  
+  return info;
+}
+
+static bool
+can_skip_vsetvli_for_load_store_p (rtx_insn *insn, const vinfo &new_info, const vinfo &curr_info)
+{
+  gcc_assert (recog_memoized (insn) >= 0);
+  if (!can_skip_load_store_insn_p (insn))
+    return false;
+  machine_mode mode = rvv_translate_attr_mode (insn);
+  unsigned vsew = rvv_classify_vsew_field (mode);
+  gcc_assert (store_insn_p (insn) == new_info.get_store_p ());
+  return curr_info.load_store_compatible_p (vsew, new_info);
+}
+
+static bool
+need_vsetvli (rtx_insn *insn, const vinfo &require, const vinfo &curr_info)
+{
+  if (!need_vsetvli_p (insn))
+    return false;
+    
+  if (curr_info.compatible_p (require))
+    return false;
+
+  // We didn't find a compatible value. If our AVL is a virtual register,
+  // it might be defined by a VSET(I)VLI. If it has the same VTYPE we need
+  // and the last VL/VTYPE we observed is the same, we don't need a
+  // VSETVLI here.
+  if (!curr_info.unknown_p () && require.avl_reg_p () &&
+      REGNO (require.get_avl ()) >= FIRST_PSEUDO_REGISTER && 
+      !curr_info.get_sew_lmul_ratio_only_p () &&
+      curr_info.compatible_vtype_p (require))
+    {
+      rtx_insn *def_rtl = fetch_def_insn (insn, require);
+      if (def_rtl != NULL)
+        {
+          if (is_vector_config_instr (def_rtl))
+            {
+              vinfo def_info = get_info_for_vsetvli (def_rtl, curr_info);
+              if (def_info.avl_equal_p (curr_info) &&
+                  def_info.vlmax_equal_p (curr_info))
+                return false;
+            }
+        }
+    }
+
+  return true;
+}
+
+static bool
+need_vsetvli_phi (const vinfo &new_info, rtx_insn *rtl)
+{
+  /* Optimize the case as follows:
+  void foo (int8_t *base, int8_t* out, size_t vl, unsigned int m)
+  {
+    vint8mf8_t v0;
+    size_t avl;
+    if (m > 1000)
+      avl = vsetvl_e8mf8 (vl);
+    else
+      avl = vsetvl_e8mf8 (vl << 2);
+    for (int i = 0; i < m; i++)
+      {
+        v0 = vle8_v_i8mf8 (base + i * 32,avl);
+        v0 = vadd_vv_i8mf8 (v0,v0,avl);
+      }
+    *(vint8mf8_t*)out = v0;
+  } */
+
+  /* We need the RTL-SSA phi information for this optimization, which
+     requires an optimization level greater than or equal to 2.  */
+  if (optimize < 2)
+    return true;
+
+  if (!(!new_info.unknown_p () && new_info.get_avl () &&
+        GET_CODE (new_info.get_avl ()) == REG))
+    return true;
+
+  rtx avl = new_info.get_avl ();
+
+  insn_info *next;
+  /* fetch phi_node.  */
+  for (insn_info *insn = crtl->ssa->first_insn (); insn; insn = next)
+    {
+      next = insn->next_any_insn ();
+      if (insn->rtl () == rtl)
+        {
+          bb_info *bb = insn->bb ();
+          ebb_info *ebb = bb->ebb ();
+          resource_info resource{GET_MODE (avl), REGNO (avl)};
+          insn_info *phi_insn = ebb->phi_insn ();
+          phi_info *phi;
+          def_lookup dl = crtl->ssa->find_def (resource, phi_insn);
+          def_info *set = dl.prev_def (phi_insn);
+
+          if (!set)
+            return true;
+
+          if (!is_a<phi_info *> (set))
+            return true;
+
+          // There is an existing phi.
+          phi = as_a<phi_info *> (set);
+          for (unsigned int i = 0; i < phi->num_inputs (); i++)
+            {
+              def_info *def = phi->input_value (i);
+              if (!def)
+                return true;
+              insn_info *def_insn = def->insn ();
+              rtx_insn *def_rtl = def_insn->rtl ();
+
+              if (!def_rtl)
+                return true;
+              if (!INSN_P (def_rtl))
+                return true;
+              extract_insn_cached (def_rtl);
+              if (recog_data.n_operands > 0 &&
+                  rtx_equal_p (recog_data.operand[0], avl))
+                {
+                  if (get_attr_type (def_rtl) &&
+                      get_attr_type (def_rtl) == TYPE_VSETVL)
+                    {
+                      basic_block def_bb = BLOCK_FOR_INSN (def_rtl);
+                      bb_vinfo info = bb_vinfo_map.at(def_bb->index);
+                      // If the exit from the predecessor has the VTYPE
+                      // we are looking for we might be able to avoid a
+                      // VSETVLI.
+                      if (info.exit.unknown_p () ||
+                          !info.exit.vtype_equal_p (new_info))
+                        return true;
+                      // We found a VSET(I)VLI make sure it matches the
+                      // output of the predecessor block.
+                      vinfo curr_info;
+                      vinfo avl_def_info =
+                          get_info_for_vsetvli (def_rtl, curr_info);
+                      if (!avl_def_info.vtype_equal_p (info.exit) ||
+                          !avl_def_info.avl_equal_p (info.exit))
+                        return true;
+                    }
+                  else
+                    return true;
+                }
+            }
+        }
+    }
+
+  // If all the incoming values to the PHI checked out, we don't need
+  // to insert a VSETVLI.
+  return false;
+}
+
+static bool
+compute_vl_vtype_changes (basic_block bb)
+{
+  bool vector_p = false;
+
+  bb_vinfo &info = bb_vinfo_map[bb->index];
+  info.change = info.pred;
+  rtx_insn *insn = NULL;
+  vinfo curr_info;
+
+  FOR_BB_INSNS (bb, insn)
+  {
+    // If this is an explicit VSETVLI or VSETIVLI, update our state.
+    if (is_vector_config_instr (insn))
+      {
+        vector_p = true;
+        info.change = get_info_for_vsetvli (insn, curr_info);
+        curr_info = info.change;
+        continue;
+      }
+    
+    /*  According to vector.md, each instruction pattern is a PARALLEL
+        with at least 2 side effects, the last 2 of which are a use of
+        vl and a use of vtype.  */
+    if (use_vtype_p (insn))
+      {
+        vector_p = true;
+
+        vinfo new_info = compute_info_for_instr (insn, curr_info);
+        curr_info = new_info;
+        if (!info.change.valid_p ())
+          info.change = new_info;
+        else
+          {
+            // If this instruction isn't compatible with the previous VL/VTYPE
+            // we need to insert a VSETVLI.
+            // If this is a unit-stride or strided load/store, we may be able
+            // to use the EMUL=(EEW/SEW)*LMUL relationship to avoid changing
+            // vtype. NOTE: We only do this if the vtype we're comparing
+            // against was created in this block. We need the first and third
+            // phase to treat the store the same way.
+            if (!can_skip_vsetvli_for_load_store_p (insn, new_info, info.change) &&
+                need_vsetvli (insn, new_info, info.change))
+              info.change = new_info;
+          }
+      }
+    // If this is something that updates VL/VTYPE that we don't know about, set
+    // the state to unknown.
+    if (update_vl_vtype_p (insn))
+      {
+        curr_info = vinfo::get_unknown ();
+        info.change = vinfo::get_unknown ();
+      }
+  }
+
+  return vector_p;
+}
+
+static void
+compute_incoming_vl_vtype (const basic_block bb)
+{
+  bb_vinfo &info = bb_vinfo_map[bb->index];
+  info.inqueue = false;
+
+  vinfo in_info;
+  if (EDGE_COUNT (bb->preds) == 0)
+    {
+      // There are no predecessors, so use the default starting status.
+      in_info.set_unknown ();
+    }
+  else
+    {
+      edge e;
+      edge_iterator ei;
+      FOR_EACH_EDGE (e, ei, bb->preds)
+      {
+        basic_block ancestor = e->src;
+        in_info = in_info.intersect (bb_vinfo_map.at(ancestor->index).exit);
+      }
+    }
+
+  // If we don't have any valid predecessor value, wait until we do.
+  if (!in_info.valid_p ())
+    return;
+  
+  // If no change, no need to rerun block
+  if (in_info == info.pred)
+    return;
+
+  info.pred = in_info;
+  if (dump_file)
+    {
+      fprintf (dump_file, "Entry state of bb %d changed to\n", bb->index);
+      info.pred.print ();
+    }
+  
+  // Note: It's tempting to cache the state changes here, but due to the
+  // compatibility checks performed, a block's output state can change based on
+  // the input state.  To cache, we'd have to add logic for finding
+  // never-compatible state changes.
+  compute_vl_vtype_changes (bb);
+  vinfo tmpstatus = info.change;
+
+  // If the new exit value matches the old exit value, we don't need to revisit
+  // any blocks.
+  if (info.exit == tmpstatus)
+    return;
+
+  info.exit = tmpstatus;
+  
+  if (dump_file)
+    {
+      fprintf (dump_file, "Exit state of bb %d changed to\n", bb->index);
+      info.exit.print ();
+    }
+  // Add the successors to the work list so we can propagate the changed exit
+  // status.
+  edge e;
+  edge_iterator ei;
+  FOR_EACH_EDGE (e, ei, bb->succs)
+  {
+    basic_block succ = e->dest;
+    if (!bb_vinfo_map[succ->index].inqueue)
+      bb_queue.push_back (succ);
+  }
+}
+
+static void
+insert_vsetvl (rtx_insn *insn, const vinfo &curr_info, const vinfo &prev_info)
+{
+  extract_insn_cached (insn);
+  rtx avl = curr_info.get_avl ();
+  rtx vtype = GEN_INT (curr_info.encode_vtype ());
+  rtx zero = gen_rtx_REG (Pmode, X0_REGNUM);
+
+  if (recog_clobber_vl_vtype (insn) == MOV_CLOBBER_REG_MEM
+    || recog_clobber_vl_vtype (insn) == MOV_CLOBBER_MEM_REG)
+    {
+      gcc_assert (
+          reload_completed &&
+          rtx_equal_p (curr_info.get_avl (), gen_rtx_REG (Pmode, X0_REGNUM)));
+      avl = recog_data.operand[2];
+      PUT_MODE (avl, Pmode);
+      emit_vsetvl_insn (avl, gen_rtx_REG (Pmode, X0_REGNUM), vtype, insn);
+      return;
+    }
+
+  // Use X0, X0 form if the AVL is the same and the SEW+LMUL gives the same
+  // VLMAX
+  if (prev_info.valid_p () && !prev_info.unknown_p () &&
+      curr_info.avl_equal_p (prev_info) && curr_info.vlmax_equal_p (prev_info))
+    {
+      emit_vsetvl_insn (zero, zero, vtype, insn);
+      return;
+    }
+
+  if (curr_info.get_avl () == NULL_RTX)
+    {
+      if (prev_info.valid_p () && !prev_info.unknown_p () &&
+          curr_info.vlmax_equal_p (prev_info))
+        {
+          emit_vsetvl_insn (zero, zero, vtype, insn);
+          return;
+        }
+      // Otherwise use an AVL of 0 to avoid depending on previous vl.
+      emit_vsetvl_insn (zero, GEN_INT (0), vtype, insn);
+      return;
+    }
+
+  if (rtx_equal_p (curr_info.get_avl (), gen_rtx_REG (Pmode, X0_REGNUM)))
+    {
+      if (reload_completed)
+        avl = gen_rtx_REG (Pmode, X0_REGNUM);
+      else
+        avl = gen_reg_rtx (Pmode);
+      emit_vsetvl_insn (avl, gen_rtx_REG (Pmode, X0_REGNUM), vtype, insn);
+      return;
+    }
+
+  emit_vsetvl_insn (zero, avl, vtype, insn);
+}
+
+static void
+cleanup_insn_op (rtx_insn *insn)
+{
+  if (!reload_completed)
+    return;
+
+  /* 1. Remove the vl operand from every rvv instruction.
+     2. Replace every rvv instruction reloaded for register spilling.  */
+  rtx pat;
+  extract_insn_cached (insn);
+  machine_mode mode = rvv_translate_attr_mode (insn);
+  if (recog_clobber_vl_vtype (insn) == MOV_CLOBBER_REG_MEM)
+    {
+      if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+        pat =
+              gen_vlm (mode, recog_data.operand[0],
+                       XEXP (recog_data.operand[1], 0), const0_rtx, const0_rtx);
+      else
+        pat = gen_vle (mode, recog_data.operand[0], const0_rtx, const0_rtx,
+                       XEXP (recog_data.operand[1], 0), const0_rtx, const0_rtx);
+
+      validate_change (insn, &PATTERN (insn), pat, false);
+    }
+  else if (recog_clobber_vl_vtype (insn) == MOV_CLOBBER_MEM_REG)
+    {
+      if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+        pat = gen_vsm (mode, XEXP (recog_data.operand[0], 0),
+                       recog_data.operand[1], const0_rtx, const0_rtx);
+      else
+        pat = gen_vse (mode, const0_rtx, XEXP (recog_data.operand[0], 0),
+                       recog_data.operand[1], const0_rtx, const0_rtx);
+
+      validate_change (insn, &PATTERN (insn), pat, false);
+    }
+  else
+    replace_op (insn, const0_rtx, REPLACE_VL);
+}
+
+static void
+emit_vsetvlis (const basic_block bb)
+{
+  vinfo curr_info;
+  rtx_insn *insn = NULL;
+
+  FOR_BB_INSNS (bb, insn)
+  {
+    // If this is an explicit VSETVLI or VSETIVLI, update our state.
+    if (is_vector_config_instr (insn))
+      {
+        curr_info = get_info_for_vsetvli (insn, curr_info);
+        continue;
+      }
+
+    if (use_vtype_p (insn))
+      {
+        vinfo new_info = compute_info_for_instr (insn, curr_info);
+        
+        if (!curr_info.valid_p ())
+          {
+            // We haven't found any vector instructions or VL/VTYPE changes
+            // yet, use the predecessor information.
+            curr_info = bb_vinfo_map[bb->index].pred;
+            gcc_assert (curr_info.valid_p () &&
+                        "Expected a valid predecessor state.");
+            if (need_vsetvli (insn, new_info, curr_info))
+              {
+                // If this is the first implicit state change, and the state change
+                // requested can be proven to produce the same register contents, we
+                // can skip emitting the actual state change and continue as if we
+                // had since we know the GPR result of the implicit state change
+                // wouldn't be used and VL/VTYPE registers are correct.  Note that
+                // we *do* need to model the state as if it changed as while the
+                // register contents are unchanged, the abstract model can change.
+                if (need_vsetvli_phi (new_info, insn))
+                  insert_vsetvl (insn, new_info, curr_info);
+                curr_info = new_info;
+              }
+          }
+        else
+          {
+            // If this instruction isn't compatible with the previous VL/VTYPE
+            // we need to insert a VSETVLI.
+            // If this is a unit-stride or strided load/store, we may be able
+            // to use the EMUL=(EEW/SEW)*LMUL relationship to avoid changing
+            // vtype. NOTE: We can't use predecessor information for the store.
+            // We must treat it the same as the first phase so that we produce
+            // the correct vl/vtype for successor blocks.
+            if (!can_skip_vsetvli_for_load_store_p (insn, new_info,
+                                                    curr_info) &&
+                need_vsetvli (insn, new_info, curr_info))
+              {
+                insert_vsetvl (insn, new_info, curr_info);
+                curr_info = new_info;
+              }
+          }
+        cleanup_insn_op (insn);
+      }
+    // If this is something that updates VL/VTYPE in a way we don't know
+    // about, set the state to unknown.
+    if (update_vl_vtype_p (insn))
+      curr_info = vinfo::get_unknown ();
+
+    // If we reach the end of the block and our current info doesn't match the
+    // expected info, insert a vsetvli to correct.
+    if (insn == BB_END (bb))
+      {
+        const vinfo exit_info = bb_vinfo_map.at(bb->index).exit;
+        if (curr_info.valid_p () && exit_info.valid_p () &&
+            !exit_info.unknown_p () && curr_info != exit_info)
+          {
+            insert_vsetvl (insn, exit_info, curr_info);
+            curr_info = exit_info;
+          }
+      }
+  }
+}
+
+static void
+dolocalprepass (const basic_block bb)
+{
+  rtx_insn *insn = NULL;
+  vinfo curr_info = vinfo::get_unknown ();
+  FOR_BB_INSNS (bb, insn)
+  {
+    // If this is an explicit VSETVLI or VSETIVLI, update our state.
+    if (is_vector_config_instr (insn))
+      {
+        curr_info = get_info_for_vsetvli (insn, curr_info);
+        continue;
+      }
+
+    if (scalar_move_insn_p (insn))
+      {
+        gcc_assert (use_vtype_p (insn) && use_vl_p (insn));
+        const vinfo new_info = compute_info_for_instr (insn, curr_info);
+
+        // For vmv.s.x and vfmv.s.f, there are only two behaviors: VL = 0
+        // and VL > 0.  We can discard the user-requested AVL and reuse the
+        // previous one if we can prove both are zero or both are nonzero.
+        // This removes a vsetvli entirely if the types match, or allows use
+        // of the cheaper AVL-preserving variant if VLMAX doesn't change.
+        // If VLMAX might change, we couldn't use the 'vsetvli x0, x0, vtype'
+        // variant, so we avoid the transform to prevent extending the live
+        // range of an AVL register operand.
+        // TODO: We can probably relax this for immediates.
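+        // Illustrative example (register names assumed): given
+        //   vsetvli zero, a0, e64,m1,ta,mu
+        //   vmv.s.x v1, a1      <- requests some other nonzero AVL
+        // the vmv.s.x can simply reuse AVL a0; only VL == 0 versus VL > 0
+        // is observable, so no extra vsetvli is needed.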
+        if (((curr_info.has_nonzero_avl () && new_info.has_nonzero_avl ()) ||
+             (curr_info.has_zero_avl () && new_info.has_zero_avl ())) &&
+            new_info.vlmax_equal_p (curr_info))
+          {
+            replace_op (insn, curr_info.get_avl (), REPLACE_VL);
+            curr_info = compute_info_for_instr (insn, curr_info);
+            continue;
+          }
+      }
+
+    if (use_vtype_p (insn))
+      {
+        if (use_vl_p (insn))
+          {
+            const auto require = compute_info_for_instr (insn, curr_info);
+            // If the AVL is the result of a previous vsetvli which has the
+            // same AVL and VLMAX as our current state, we can reuse the AVL
+            // from the current state for the new one.  This allows us to
+            // generate 'vsetvli x0, x0, vtype' or possibly skip the transition
+            // entirely.
+            if (!curr_info.unknown_p () && require.get_avl () &&
+                REG_P (require.get_avl ()) &&
+                REGNO (require.get_avl ()) >= FIRST_PSEUDO_REGISTER)
+              {
+                rtx_insn *def_rtl = fetch_def_insn (insn, require);
+                
+                if (def_rtl != NULL)
+                  {
+                    if (is_vector_config_instr (def_rtl))
+                      {
+                        vinfo def_info = get_info_for_vsetvli (def_rtl, curr_info);
+                        if (def_info.avl_equal_p (curr_info) &&
+                            def_info.vlmax_equal_p (curr_info))
+                          {
+                            replace_op (insn, curr_info.get_avl (), REPLACE_VL);
+                            curr_info = compute_info_for_instr (insn, curr_info);
+                            continue;
+                          }
+                      }
+                  }
+              }
+
+            // If AVL is defined by a vsetvli with the same vtype, we can
+            // replace the AVL operand with the AVL of the defining vsetvli.
+            // We avoid general register AVLs to avoid extending live ranges
+            // without being sure we can kill the original source reg entirely.
+            // TODO: We can ignore policy bits here, we only need VL to be the
+            // same.
+            if (!curr_info.unknown_p () && require.get_avl () &&
+                REG_P (require.get_avl ()) &&
+                REGNO (require.get_avl ()) >= FIRST_PSEUDO_REGISTER)
+              {
+                rtx_insn *def_rtl = fetch_def_insn (insn, require);
+                if (def_rtl != NULL)
+                  {
+                    if (is_vector_config_instr (def_rtl))
+                      {
+                        vinfo def_info = get_info_for_vsetvli (def_rtl, curr_info);
+                        if (def_info.vtype_equal_p (require) &&
+                            (def_info.avl_const_p () ||
+                            (def_info.avl_reg_p () &&
+                            rtx_equal_p (def_info.get_avl (), gen_rtx_REG (Pmode, X0_REGNUM)))))
+                          {
+                            replace_op (insn, def_info.get_avl (), REPLACE_VL);
+                            curr_info = compute_info_for_instr (insn, curr_info);
+                            continue;
+                          }
+                      }
+                  }
+              }
+          }
+        curr_info = compute_info_for_instr (insn, curr_info);
+        continue;
+      }
+
+    // If this is something that updates VL/VTYPE that we don't know about,
+    // set the state to unknown.
+    if (update_vl_vtype_p (insn))
+      curr_info = vinfo::get_unknown ();
+  }
+}
+
+static void
+dolocalpostpass (const basic_block bb)
+{
+  rtx_insn *prev_insn = nullptr;
+  rtx_insn *insn = nullptr;
+  bool used_vl = false, used_vtype = false;
+  std::vector<rtx_insn *> to_delete;
+  FOR_BB_INSNS (bb, insn)
+  {
+    // Note: Must be *before* vsetvli handling to account for config cases
+    // which only change some subfields.
+    if (update_vl_vtype_p (insn) || use_vl_p (insn))
+      used_vl = true;
+    if (update_vl_vtype_p (insn) || use_vtype_p (insn))
+      used_vtype = true;
+
+    if (!is_vector_config_instr (insn))
+      continue;
+    
+    extract_insn_cached (insn);
+    if (prev_insn)
+      {
+        if (!used_vl && !used_vtype)
+          {
+            to_delete.push_back (prev_insn);
+            // fallthrough
+          }
+        else if (!used_vtype && is_vl_preserving_config (insn))
+          {
+            // Note: `vsetvli x0, x0, vtype' is the canonical instruction
+            // for this case.  If you find yourself wanting to add other forms
+            // to this "unused VTYPE" case, we're probably missing a
+            // canonicalization earlier.
+            // Note: We don't need to explicitly check vtype compatibility
+            // here because this form is only legal (per ISA) when not
+            // changing VL.
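+            // Illustrative fold (register names assumed):
+            //   vsetvli a5, a0, e32,m1,ta,mu   <- prev_insn, VTYPE unused
+            //   vsetvli x0, x0, e64,m2,ta,mu   <- insn, VL-preserving
+            // becomes a single `vsetvli a5, a0, e64,m2,ta,mu'.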
+            rtx new_vtype = recog_data.operand[recog_data.n_operands - 1];
+            replace_op (prev_insn, new_vtype, REPLACE_VTYPE);
+            to_delete.push_back (insn);
+            // Leave prev_insn unchanged
+            continue;
+          }
+      }
+    prev_insn = insn;
+    used_vl = false;
+    used_vtype = false;
+    
+    rtx vdef = recog_data.operand[0];
+    if (!rtx_equal_p (vdef, gen_rtx_REG (Pmode, X0_REGNUM)) &&
+        !(REGNO (vdef) >= FIRST_PSEUDO_REGISTER &&
+          (find_reg_note (insn, REG_UNUSED, vdef) ||
+           find_reg_note (insn, REG_DEAD, vdef))))
+      used_vl = true;
+  }
+
+  for (auto *to_remove : to_delete)
+    remove_insn (to_remove);
+}
+
+/// Return true if the VL value configured must be equal to the requested one.
+static bool
+has_fixed_result (const vinfo &info)
+{
+  if (!info.avl_const_p ())
+    // VLMAX is always the same value.
+    // TODO: Could extend to other registers by looking at the associated
+    // vreg def placement.
+    return rtx_equal_p (info.get_avl (), gen_rtx_REG (Pmode, X0_REGNUM));
+
+  if (VLMUL_FIELD_000 != info.get_vlmul ())
+    // TODO: Generalize the code below to account for LMUL
+    return false;
+  
+  if (!BYTES_PER_RISCV_VECTOR.is_constant ())
+    return false;
+    
+  unsigned int avl = INTVAL (info.get_avl ());
+  unsigned int vsew = info.get_vsew ();
+  machine_mode inner = vsew_to_int_mode (vsew);
+  unsigned int sew = GET_MODE_BITSIZE (as_a<scalar_mode> (inner));
+  unsigned avl_in_bits = avl * sew;
+  machine_mode mode = riscv_vector::vector_builtin_mode (
+            as_a<scalar_mode> (inner), info.get_vlmul ());
+  return GET_MODE_BITSIZE (mode).to_constant () >= avl_in_bits;
+}
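+
+/* Worked example (assuming BYTES_PER_RISCV_VECTOR is a compile-time
+   constant): for AVL = 4, SEW = 32 and LMUL = 1, avl_in_bits is 128, so
+   the result is fixed whenever the chosen vector mode is at least 128
+   bits wide -- a vsetvli with that request always produces VL == 4.  */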
+
+/// Perform simple partial redundancy elimination of the VSETVLI instructions
+/// we're about to insert by looking for cases where we can PRE from the
+/// beginning of one block to the end of one of its predecessors.  Specifically,
+/// this is geared to catch the common case of a fixed length vsetvl in a single
+/// block loop when it could execute once in the preheader instead.
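+/// For example (sketch): in a loop `preheader -> bb -> bb -> ...' whose
+/// body needs a single fixed-AVL vtype and whose preheader exit state is
+/// unknown, the vsetvli is hoisted into the preheader and executes once
+/// instead of on every iteration.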
+static void
+dopre (const basic_block bb)
+{
+  if (!bb_vinfo_map[bb->index].pred.unknown_p ())
+    return;
+
+  basic_block unavailable_pred = nullptr;
+  vinfo available_info;
+  
+  edge e;
+  edge_iterator ei;
+  FOR_EACH_EDGE (e, ei, bb->preds)
+  {
+    basic_block predecessor = e->src;
+    const vinfo &pred_info = bb_vinfo_map[predecessor->index].exit;
+    if (pred_info.unknown_p ())
+      {
+        if (unavailable_pred)
+          return;
+        unavailable_pred = predecessor;
+      }
+    else if (!available_info.valid_p ())
+      available_info = pred_info;
+    else if (available_info != pred_info)
+      return;
+  }
+      
+  // unreachable, single pred, or full redundancy.  Note that FRE
+  // is handled by phase 3.
+  if (!unavailable_pred || !available_info.valid_p ())
+    return;
+
+  // critical edge - TODO: consider splitting?
+  if (EDGE_COUNT (unavailable_pred->succs) != 1)
+    return;
+
+  // If VL can be less than AVL, then we can't reduce the execution
+  // frequency.
+  if (!has_fixed_result (available_info))
+    return;
+
+  // Does it actually let us remove an implicit transition in this block?
+  bool found = false;
+  rtx_insn *insn;
+  vinfo curr_info;
+  FOR_BB_INSNS (bb, insn)
+  {
+    if (is_vector_config_instr (insn))
+      return;
+    
+    if (use_vtype_p (insn))
+      {
+        if (available_info != compute_info_for_instr (insn, curr_info))
+          return;
+        found = true;
+        break;
+      }
+  }
+      
+  if (!found)
+    return;
+
+  // Finally, update both data flow state and insert the actual vsetvli.
+  // Doing both keeps the code in sync with the dataflow results, which
+  // is critical for correctness of phase 3.
+  auto old_info = bb_vinfo_map[unavailable_pred->index].exit;
+  if (dump_file)
+    {
+      fprintf (dump_file, "PRE VSETVLI from bb %d changed to bb %d\n", bb->index, unavailable_pred->index);
+      available_info.print ();
+    }
+  bb_vinfo_map[unavailable_pred->index].exit = available_info;
+  bb_vinfo_map[bb->index].pred = available_info;
+
+  // Note there's an implicit assumption here that terminators never use
+  // or modify VL or VTYPE.
+  auto insert_pt = BB_END (unavailable_pred);
+  insert_vsetvl (insert_pt, available_info, old_info);
+}
+
+static unsigned int
+rest_of_handle_insert_vsetvl (function *fn)
+{
+  basic_block bb;
+
+  if (n_basic_blocks_for_fn (fn) <= 0)
+    return 0;
+
+  gcc_assert (bb_vinfo_map.empty () && "Expect empty block infos.");
+
+  if (optimize >= 2)
+    {
+      // Initialization.
+      calculate_dominance_info (CDI_DOMINATORS);
+      df_analyze ();
+      crtl->ssa = new rtl_ssa::function_info (cfun);
+    }
+
+  if (dump_file)
+    fprintf (dump_file, "\nEntering InsertVSETVLI for %s\n\n",
+             current_function_name ());
+
+  /* Initialize Basic Block Map */
+  FOR_ALL_BB_FN (bb, fn)
+  {
+    bb_vinfo bb_init;
+    bb_vinfo_map.insert (std::make_pair (bb->index, bb_init));
+  }
+
+  // Scan the block locally for cases where we can mutate the operands
+  // of the instructions to reduce state transitions.  Critically, this
+  // must be done before we start propagating data flow states as these
+  // transforms are allowed to change the contents of VTYPE and VL so
+  // long as the semantics of the program stays the same.
+  FOR_ALL_BB_FN (bb, fn)
+    dolocalprepass (bb);
+
+  bool vector_p = false;
+
+  if (dump_file)
+    fprintf (dump_file,
+             "Phase 1 determine how VL/VTYPE are affected by each block:\n");
+
+  // Phase 1 - determine how VL/VTYPE are affected by each block.
+  FOR_ALL_BB_FN (bb, fn)
+  {
+    vector_p |= compute_vl_vtype_changes (bb);
+    bb_vinfo &info = bb_vinfo_map[bb->index];
+    info.exit = info.change;
+    if (dump_file)
+      {
+        fprintf (dump_file, "Initial exit state of bb %d\n", bb->index);
+        info.exit.print ();
+      }
+  }
+
+  if (!vector_p)
+    {
+      bb_vinfo_map.clear ();
+      bb_queue.clear ();
+      if (optimize >= 2)
+        {
+          // Finalization.
+          free_dominance_info (CDI_DOMINATORS);
+          if (crtl->ssa->perform_pending_updates ())
+            cleanup_cfg (0);
+    
+          delete crtl->ssa;
+          crtl->ssa = nullptr;
+        }
+      return 0;
+    }
+
+  if (dump_file)
+    fprintf (dump_file,
+             "Phase 2 determine the exit VL/VTYPE from each block:\n");
+  // Phase 2 - determine the exit VL/VTYPE from each block. We add all
+  // blocks to the list here, but will also add any that need to be
+  // revisited during Phase 2 processing.
+  FOR_ALL_BB_FN (bb, fn)
+  {
+    bb_queue.push_back (bb);
+    bb_vinfo_map[bb->index].inqueue = true;
+  }
+  while (!bb_queue.empty ())
+    {
+      bb = bb_queue.front ();
+      bb_queue.pop_front ();
+      compute_incoming_vl_vtype (bb);
+    }
+   
+  // Perform partial redundancy elimination of vsetvli transitions.
+  FOR_ALL_BB_FN (bb, fn)
+    dopre (bb);
+    
+  if (dump_file)
+    fprintf (dump_file,
+             "Phase 3 add any vsetvli instructions needed in the block:\n");
+  // Phase 3 - add any vsetvli instructions needed in the block. Use the
+  // Phase 2 information to avoid adding vsetvlis before the first vector
+  // instruction in the block if the VL/VTYPE is satisfied by its
+  // predecessors.
+  FOR_ALL_BB_FN (bb, fn) 
+    emit_vsetvlis (bb); 
+  
+  // Now that all vsetvlis are explicit, go through and do block local
+  // DSE and peephole based demanded fields based transforms.  Note that
+  // this *must* be done outside the main dataflow so long as we allow
+  // any cross block analysis within the dataflow.  We can't have both
+  // demanded fields based mutation and non-local analysis in the
+  // dataflow at the same time without introducing inconsistencies.
+  FOR_ALL_BB_FN (bb, fn)
+    dolocalpostpass (bb);
+
+  // Once we're fully done rewriting all the instructions, do a final pass
+  // through to check for VSETVLIs which write to an unused destination.
+  // For the non X0, X0 variant, we can replace the destination register
+  // with X0 to reduce register pressure.  This is really a generic
+  // optimization which can be applied to any dead def (TODO: generalize).
+  if (!reload_completed)
+    {
+      FOR_ALL_BB_FN (bb, fn)
+      {
+        rtx_insn *insn = NULL;
+        FOR_BB_INSNS (bb, insn)
+        {
+          if (is_vector_config_instr (insn))
+            {
+              extract_insn_cached (insn);
+              if (recog_data.n_operands == 3 &&
+                  !rtx_equal_p (recog_data.operand[0],
+                                gen_rtx_REG (Pmode, X0_REGNUM)) &&
+                  !rtx_equal_p (recog_data.operand[1],
+                                gen_rtx_REG (Pmode, X0_REGNUM)) &&
+                  (find_reg_note (insn, REG_UNUSED, recog_data.operand[0]) ||
+                   find_reg_note (insn, REG_DEAD, recog_data.operand[0])))
+                {
+                  rtx pat = gen_vsetvl_zero (Pmode, recog_data.operand[1],
+                                             recog_data.operand[2]);
+                  validate_change (insn, &PATTERN (insn), pat, false);
+                }
+            }
+        }
+      }
+    }
+
+  bb_vinfo_map.clear ();
+  bb_queue.clear ();
+
+  if (optimize >= 2)
+    {
+      // Finalization.
+      free_dominance_info (CDI_DOMINATORS);
+      if (crtl->ssa->perform_pending_updates ())
+        cleanup_cfg (0);
+
+      delete crtl->ssa;
+      crtl->ssa = nullptr;
+    }
+
+  return 0;
+}
+
+const pass_data pass_data_insert_vsetvl = {
+    RTL_PASS,         /* type */
+    "insert_vsetvl", /* name */
+    OPTGROUP_NONE,    /* optinfo_flags */
+    TV_NONE,          /* tv_id */
+    0,                /* properties_required */
+    0,                /* properties_provided */
+    0,                /* properties_destroyed */
+    0,                /* todo_flags_start */
+    0,                /* todo_flags_finish */
+};
+
+class pass_insert_vsetvl : public rtl_opt_pass
+{
+public:
+  pass_insert_vsetvl (gcc::context *ctxt)
+      : rtl_opt_pass (pass_data_insert_vsetvl, ctxt)
+  {
+  }
+
+  /* opt_pass methods: */
+  virtual bool
+  gate (function *)
+  {
+    return TARGET_VECTOR;
+  }
+  virtual unsigned int
+  execute (function *fn)
+  {
+    return rest_of_handle_insert_vsetvl (fn);
+  }
+
+}; // class pass_insert_vsetvl
+
+rtl_opt_pass *
+make_pass_insert_vsetvl (gcc::context *ctxt)
+{
+  return new pass_insert_vsetvl (ctxt);
+}
+
+const pass_data pass_data_insert_vsetvl2 = {
+    RTL_PASS,         /* type */
+    "insert_vsetvl2", /* name */
+    OPTGROUP_NONE,    /* optinfo_flags */
+    TV_NONE,          /* tv_id */
+    0,                /* properties_required */
+    0,                /* properties_provided */
+    0,                /* properties_destroyed */
+    0,                /* todo_flags_start */
+    0,                /* todo_flags_finish */
+};
+
+class pass_insert_vsetvl2 : public rtl_opt_pass
+{
+public:
+  pass_insert_vsetvl2 (gcc::context *ctxt)
+      : rtl_opt_pass (pass_data_insert_vsetvl2, ctxt)
+  {
+  }
+
+  /* opt_pass methods: */
+  virtual bool
+  gate (function *)
+  {
+    return TARGET_VECTOR;
+  }
+  virtual unsigned int
+  execute (function *fn)
+  {
+    return rest_of_handle_insert_vsetvl (fn);
+  }
+
+}; // class pass_insert_vsetvl2
+
+rtl_opt_pass *
+make_pass_insert_vsetvl2 (gcc::context *ctxt)
+{
+  return new pass_insert_vsetvl2 (ctxt);
+}
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-opts.h b/gcc/config/riscv/riscv-opts.h
index d99b8dcbaf1..1c42f6297f9 100644
--- a/gcc/config/riscv/riscv-opts.h
+++ b/gcc/config/riscv/riscv-opts.h
@@ -81,6 +81,18 @@ enum riscv_vector_bits_enum
   RVV_4096 = 4096
 };
 
+enum vsew_field_enum
+{
+  VSEW_FIELD_000, /* SEW = 8 */
+  VSEW_FIELD_001, /* SEW = 16 */
+  VSEW_FIELD_010, /* SEW = 32 */
+  VSEW_FIELD_011, /* SEW = 64 */
+  VSEW_FIELD_100, /* SEW = 128 */
+  VSEW_FIELD_101, /* SEW = 256 */
+  VSEW_FIELD_110, /* SEW = 512 */
+  VSEW_FIELD_111  /* SEW = 1024 */
+};
+
 enum vlmul_field_enum
 {
   VLMUL_FIELD_000, /* LMUL = 1 */
diff --git a/gcc/config/riscv/riscv-passes.def b/gcc/config/riscv/riscv-passes.def
index 23ef8ac6114..282a0402485 100644
--- a/gcc/config/riscv/riscv-passes.def
+++ b/gcc/config/riscv/riscv-passes.def
@@ -18,3 +18,5 @@
    <http://www.gnu.org/licenses/>.  */
 
 INSERT_PASS_AFTER (pass_rtl_store_motion, 1, pass_shorten_memrefs);
+INSERT_PASS_AFTER (pass_split_all_insns, 1, pass_insert_vsetvl);
+INSERT_PASS_BEFORE (pass_sched2, 1, pass_insert_vsetvl2);
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index cae2974b54f..9a7e120854a 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -96,6 +96,8 @@ extern std::string riscv_arch_str (bool version_p = true);
 extern bool riscv_hard_regno_rename_ok (unsigned, unsigned);
 
 rtl_opt_pass * make_pass_shorten_memrefs (gcc::context *ctxt);
+rtl_opt_pass * make_pass_insert_vsetvl (gcc::context *ctxt);
+rtl_opt_pass * make_pass_insert_vsetvl2 (gcc::context *ctxt);
 
 /* Information about one CPU we know about.  */
 struct riscv_cpu_info {
@@ -112,15 +114,32 @@ struct riscv_cpu_info {
 extern const riscv_cpu_info *riscv_find_cpu (const char *);
 
 /* Routines implemented in riscv-vector.cc. */
+extern bool rvv_mask_mode_p (machine_mode);
 extern bool rvv_mode_p (machine_mode);
 extern bool rvv_legitimate_poly_int_p (rtx);
 extern unsigned int rvv_offset_temporaries (bool, poly_int64);
+extern enum vsew_field_enum rvv_classify_vsew_field (machine_mode);
 extern enum vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
 extern unsigned int rvv_parse_vsew_field (unsigned int);
 extern unsigned int rvv_parse_vlmul_field (unsigned int);
 extern bool rvv_parse_vta_field (unsigned int);
 extern bool rvv_parse_vma_field (unsigned int);
 extern int rvv_regsize (machine_mode);
+extern rtx rvv_gen_policy (unsigned int rvv_policy = 0);
+extern opt_machine_mode rvv_get_mask_mode (machine_mode);
+extern machine_mode rvv_translate_attr_mode (rtx_insn *);
+extern void emit_op5 (unsigned int unspec, machine_mode Vmode,
+                      machine_mode VSImode, machine_mode VMSImode,
+                      machine_mode VSUBmode, rtx *operands,
+                      rtx (*gen_vx) (rtx, rtx, rtx, rtx, rtx),
+                      rtx (*gen_vx_32bit) (rtx, rtx, rtx, rtx, rtx),
+                      rtx (*gen_vv) (rtx, rtx, rtx, rtx, rtx),
+                      bool (*imm_p) (rtx), int i, bool reverse);
 
 /* We classify builtin types into two classes:
    1. General builtin class which is using the
diff --git a/gcc/config/riscv/riscv-vector-builtins-iterators.def b/gcc/config/riscv/riscv-vector-builtins-iterators.def
index cc968f5534f..77a391c7630 100644
--- a/gcc/config/riscv/riscv-vector-builtins-iterators.def
+++ b/gcc/config/riscv/riscv-vector-builtins-iterators.def
@@ -7,6 +7,38 @@
 #define DEF_RISCV_ARG_MODE_ATTR(A, B, C, D, E)
 #endif
 
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V, 31)
+DEF_RISCV_ARG_MODE_ATTR(V, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 6, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 7, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 8, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 9, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 10, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 11, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 12, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 13, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 15, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 16, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 17, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 18, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 19, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 20, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 21, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V, 22, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 23, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 24, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 25, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 26, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 27, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 28, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 29, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(V, 30, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VI, 22)
 DEF_RISCV_ARG_MODE_ATTR(VI, 0, VNx2QI, VNx2QI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VI, 1, VNx4QI, VNx4QI, TARGET_ANY)
@@ -30,6 +62,210 @@ DEF_RISCV_ARG_MODE_ATTR(VI, 18, VNx2DI, VNx2DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VI, 19, VNx4DI, VNx4DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VI, 20, VNx8DI, VNx8DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VI, 21, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VF, 9)
+DEF_RISCV_ARG_MODE_ATTR(VF, 0, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 1, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 2, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 3, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 4, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 5, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 6, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 7, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VF, 8, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VB, 7)
+DEF_RISCV_ARG_MODE_ATTR(VB, 0, VNx2BI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 1, VNx4BI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 2, VNx8BI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 3, VNx16BI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 4, VNx32BI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 5, VNx64BI, VNx64BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VB, 6, VNx128BI, VNx128BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VFULL, 24)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 0, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 1, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 2, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 3, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 4, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 5, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 6, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 7, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 8, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 9, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 10, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 11, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 12, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 13, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 14, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 15, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 16, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 17, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 18, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 19, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 20, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 21, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 22, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VFULL, 23, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VPARTIAL, 7)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 3, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 4, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 5, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VPARTIAL, 6, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(V64BITI, 4)
+DEF_RISCV_ARG_MODE_ATTR(V64BITI, 0, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V64BITI, 1, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V64BITI, 2, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(V64BITI, 3, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VM, 38)
+DEF_RISCV_ARG_MODE_ATTR(VM, 0, VNx2BI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 1, VNx4BI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 2, VNx8BI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 3, VNx16BI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 4, VNx32BI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 5, VNx64BI, VNx64BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 6, VNx128BI, VNx128BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 7, VNx2QI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 8, VNx4QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 9, VNx8QI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 10, VNx16QI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 11, VNx32QI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 12, VNx64QI, VNx64BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 13, VNx128QI, VNx128BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 14, VNx2HI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 15, VNx4HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 16, VNx8HI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 17, VNx16HI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 18, VNx32HI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 19, VNx64HI, VNx64BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 20, VNx2SI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 21, VNx4SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 22, VNx8SI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 23, VNx16SI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 24, VNx32SI, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 25, VNx2DI, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 26, VNx4DI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 27, VNx8DI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 28, VNx16DI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 29, VNx2SF, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 30, VNx4SF, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 31, VNx8SF, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 32, VNx16SF, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 33, VNx32SF, VNx32BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 34, VNx2DF, VNx2BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 35, VNx4DF, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 36, VNx8DF, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VM, 37, VNx16DF, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VSUB, 31)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 0, VNx2QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 1, VNx4QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 2, VNx8QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 3, VNx16QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 4, VNx32QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 5, VNx64QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 6, VNx128QI, QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 7, VNx2HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 8, VNx4HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 9, VNx8HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 10, VNx16HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 11, VNx32HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 12, VNx64HI, HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 13, VNx2SI, SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 14, VNx4SI, SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 15, VNx8SI, SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 16, VNx16SI, SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 17, VNx32SI, SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 18, VNx2DI, DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 19, VNx4DI, DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 20, VNx8DI, DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 21, VNx16DI, DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 22, VNx2SF, SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 23, VNx4SF, SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 24, VNx8SF, SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 25, VNx16SF, SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 26, VNx32SF, SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 27, VNx2DF, DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 28, VNx4DF, DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 29, VNx8DF, DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSUB, 30, VNx16DF, DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VDI_TO_VSI, 22)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 0, VNx2QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 1, VNx4QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 2, VNx8QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 3, VNx16QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 4, VNx32QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 5, VNx64QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 6, VNx128QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 7, VNx2HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 8, VNx4HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 9, VNx8HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 10, VNx16HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 11, VNx32HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 12, VNx64HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 13, VNx2SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 14, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 15, VNx8SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 16, VNx16SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 17, VNx32SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 18, VNx2DI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 19, VNx4DI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 20, VNx8DI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI, 21, VNx16DI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VDI_TO_VSI_VM, 22)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 0, VNx2QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 1, VNx4QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 2, VNx8QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 3, VNx16QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 4, VNx32QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 5, VNx64QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 6, VNx128QI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 7, VNx2HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 8, VNx4HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 9, VNx8HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 10, VNx16HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 11, VNx32HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 12, VNx64HI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 13, VNx2SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 14, VNx4SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 15, VNx8SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 16, VNx16SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 17, VNx32SI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 18, VNx2DI, VNx4BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 19, VNx4DI, VNx8BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 20, VNx8DI, VNx16BI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VDI_TO_VSI_VM, 21, VNx16DI, VNx32BI, TARGET_ANY)
 
 #undef DEF_RISCV_ARG_MODE_ATTR_VARIABLE
 #undef DEF_RISCV_ARG_MODE_ATTR
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index a9c8b290104..426490945dd 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -66,6 +66,7 @@
 #include "tree-ssa-loop-niter.h"
 #include "rtx-vector-builder.h"
 #include "riscv-vector.h"
+#include "riscv-vector-builtins.h"
 /* This file should be included last.  */
 #include "target-def.h"
 
@@ -158,6 +159,38 @@ rvv_offset_temporaries (bool add_p, poly_int64 offset)
   return count + rvv_add_offset_1_temporaries (constant);
 }
 
+/* Return the vsew field for a specific machine mode. */
+
+enum vsew_field_enum
+rvv_classify_vsew_field (machine_mode mode)
+{
+  switch (GET_MODE_INNER (mode))
+    {
+    case E_QImode:
+      return VSEW_FIELD_000;
+
+    case E_HImode:
+      return VSEW_FIELD_001;
+
+    case E_SImode:
+    case E_SFmode:
+      return VSEW_FIELD_010;
+
+    case E_DImode:
+    case E_DFmode:
+      return VSEW_FIELD_011;
+
+    case E_TImode:
+      return VSEW_FIELD_100;
+
+    default:
+      break;
+    }
+
+  /* We don't care about VSEW for mask modes.  */
+  return VSEW_FIELD_000;
+}
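+
+/* For example, rvv_classify_vsew_field (VNx4SImode) returns VSEW_FIELD_010,
+   i.e. SEW = 32.  */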
+
 /* Return the vlmul field for a specific machine mode. */
 
 enum vlmul_field_enum
@@ -271,4 +304,339 @@ rvv_get_mask_mode (machine_mode mode)
       && rvv_mask_mode_p (mask_mode))
       return mask_mode;
   return default_get_mask_mode (mode);
+}
+
+/* Generate policy bitmap for a specific rvv_policy. */
+rtx
+rvv_gen_policy (unsigned int)
+{
+  return riscv_vector::gen_any_policy ();
+}
+
+/* Return the machine mode for an insn, based on its mode attribute. */
+machine_mode
+rvv_translate_attr_mode (rtx_insn *insn)
+{
+  gcc_assert (recog_memoized (insn) >= 0);
+
+  switch (get_attr_mode (insn))
+    {
+#define TRANSLATE_VECTOR_MODE(MODE)                                            \
+  case MODE_VNX##MODE:                                                         \
+    return VNx##MODE##mode;
+      TRANSLATE_VECTOR_MODE (8QI)
+      TRANSLATE_VECTOR_MODE (4HI)
+      TRANSLATE_VECTOR_MODE (2SI)
+      TRANSLATE_VECTOR_MODE (2SF)
+      TRANSLATE_VECTOR_MODE (8BI)
+      TRANSLATE_VECTOR_MODE (4QI)
+      TRANSLATE_VECTOR_MODE (2HI)
+      TRANSLATE_VECTOR_MODE (4BI)
+      TRANSLATE_VECTOR_MODE (2QI)
+      TRANSLATE_VECTOR_MODE (2BI)
+      TRANSLATE_VECTOR_MODE (16QI)
+      TRANSLATE_VECTOR_MODE (8HI)
+      TRANSLATE_VECTOR_MODE (4SI)
+      TRANSLATE_VECTOR_MODE (2DI)
+      TRANSLATE_VECTOR_MODE (4SF)
+      TRANSLATE_VECTOR_MODE (2DF)
+      TRANSLATE_VECTOR_MODE (16BI)
+      TRANSLATE_VECTOR_MODE (32QI)
+      TRANSLATE_VECTOR_MODE (16HI)
+      TRANSLATE_VECTOR_MODE (8SI)
+      TRANSLATE_VECTOR_MODE (4DI)
+      TRANSLATE_VECTOR_MODE (8SF)
+      TRANSLATE_VECTOR_MODE (4DF)
+      TRANSLATE_VECTOR_MODE (32BI)
+      TRANSLATE_VECTOR_MODE (64QI)
+      TRANSLATE_VECTOR_MODE (32HI)
+      TRANSLATE_VECTOR_MODE (16SI)
+      TRANSLATE_VECTOR_MODE (8DI)
+      TRANSLATE_VECTOR_MODE (16SF)
+      TRANSLATE_VECTOR_MODE (8DF)
+      TRANSLATE_VECTOR_MODE (64BI)
+      TRANSLATE_VECTOR_MODE (128QI)
+      TRANSLATE_VECTOR_MODE (64HI)
+      TRANSLATE_VECTOR_MODE (32SI)
+      TRANSLATE_VECTOR_MODE (16DI)
+      TRANSLATE_VECTOR_MODE (32SF)
+      TRANSLATE_VECTOR_MODE (16DF)
+      TRANSLATE_VECTOR_MODE (128BI)
+
+    default:
+      break;
+    }
+
+  return VOIDmode;
+}
+
+/* Return the vtype field for a specific machine mode. */
+static unsigned int
+classify_vtype_field (machine_mode mode)
+{
+  unsigned int vlmul = rvv_classify_vlmul_field (mode);
+  unsigned int vsew = rvv_classify_vsew_field (mode);
+  unsigned int vtype = (vsew << 3) | (vlmul & 0x7) | 0x40;
+  return vtype;
+}
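+
+/* For example, an e32,m1 mode has vsew = VSEW_FIELD_010 and
+   vlmul = VLMUL_FIELD_000, so vtype = (0b010 << 3) | 0b000 | 0x40 = 0x50;
+   the 0x40 sets the vta (tail agnostic) bit and leaves vma as 0.  */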
+
+/* lmul = real_lmul * 8
+   guarantee integer
+   e.g.
+     1  => 1/8
+     2  => 1/4
+     4  => 1/2
+     8  => 1
+     16 => 2
+     32 => 4
+     64 => 8
+ */
+static unsigned int
+get_lmulx8 (machine_mode mode)
+{
+  unsigned int vlmul = rvv_classify_vlmul_field (mode);
+  switch (vlmul)
+    {
+      case VLMUL_FIELD_000:
+        return 8;
+      case VLMUL_FIELD_001:
+        return 16;
+      case VLMUL_FIELD_010:
+        return 32;
+      case VLMUL_FIELD_011:
+        return 64;
+      case VLMUL_FIELD_101:
+        return 1;
+      case VLMUL_FIELD_110:
+        return 2;
+      case VLMUL_FIELD_111:
+        return 4;
+      default:
+        gcc_unreachable ();
+    }
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static rtx
+force_reg_for_over_uimm (rtx vl)
+{
+  if (CONST_SCALAR_INT_P (vl) && INTVAL (vl) >= 32)
+    {
+      return force_reg (Pmode, vl);
+    }
+
+  return vl;
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static rtx
+gen_vlx2 (rtx avl, machine_mode Vmode, machine_mode VSImode)
+{
+  if (rtx_equal_p (avl, gen_rtx_REG (Pmode, X0_REGNUM)))
+    {
+      return avl;
+    }
+  rtx i32vl = NULL_RTX;
+  if (CONST_SCALAR_INT_P (avl))
+    {
+      unsigned int vlen_max;
+      unsigned int vlen_min;
+      if (riscv_vector_chunks.is_constant ())
+        {
+          vlen_max = riscv_vector_chunks.to_constant () * 64;
+          vlen_min = vlen_max;
+        }
+      else
+        {
+          /* TODO: The RVV spec allows VLEN up to 65536; raise vlen_max
+             when that is supported.  */
+          vlen_max = RVV_4096;
+          vlen_min = RVV_128;
+        }
+      unsigned int max_vlmax = (vlen_max / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
+      unsigned int min_vlmax = (vlen_min / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
+      
+      unsigned HOST_WIDE_INT avl_int = INTVAL (avl);
+      if (avl_int <= min_vlmax)
+        {
+          i32vl = gen_int_mode (2 * avl_int, SImode);
+        }
+      else if (avl_int >= 2 * max_vlmax)
+        {
+          // Just set i32vl to VLMAX in this situation
+          i32vl = gen_reg_rtx (Pmode);
+          unsigned int vtype = classify_vtype_field (VSImode);
+          emit_insn (gen_vsetvl (Pmode, i32vl, gen_rtx_REG (Pmode, X0_REGNUM), GEN_INT (vtype)));
+        }
+      else
+        {
+          // For an AVL in (min_vlmax, 2 * max_vlmax), the VL actually
+          // granted is implementation-defined, so let the fallback code
+          // below compute it at run time.
+        }
+    }
+  if (!i32vl)
+    {
+      // Use a vsetvli instruction to read the VL the hardware actually
+      // grants, which is implementation-defined.
+      rtx i64vl = gen_reg_rtx (Pmode);
+      unsigned int vtype = classify_vtype_field (Vmode);
+      emit_insn (gen_vsetvl (Pmode, i64vl, force_reg (Pmode, avl), GEN_INT (vtype)));
+      // Double it to get the VL for 32-bit elements.
+      i32vl = gen_reg_rtx (Pmode);
+      emit_insn (gen_rtx_SET (i32vl, gen_rtx_ASHIFT (Pmode, i64vl, const1_rtx)));
+    }
+
+  return force_reg_for_over_uimm (i32vl);
+}
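+
+/* Illustrative constant-AVL cases (assuming VLEN is not known at compile
+   time, so vlen_min = 128): for an e64,m1 mode min_vlmax is 2, so AVL = 2
+   is simply doubled to the immediate 4, while a large AVL such as 8192
+   exceeds 2 * max_vlmax and degenerates to a VLMAX read via
+   `vsetvli reg, x0, vtype'.  */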
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static void
+emit_int64_to_vector_32bit (machine_mode Vmode, machine_mode VSImode,
+                            machine_mode VMSImode, rtx vd, rtx s, rtx vl,
+                            rtx tail)
+{
+  if (CONST_SCALAR_INT_P (s))
+    {
+      s = force_reg (DImode, s);
+    }
+
+  rtx hi = gen_highpart (SImode, s);
+  rtx lo = gen_lowpart (SImode, s);
+
+  rtx zero = gen_rtx_REG (SImode, X0_REGNUM);
+
+  /* make a "0101..." mask vector */
+  rtx vm1 = gen_reg_rtx (VNx4SImode);
+  emit_insn (gen_vmv_v_x_internal (VNx4SImode, vm1, const0_rtx,
+                                   force_reg (SImode, GEN_INT (0x55555555)),
+                                   zero, rvv_gen_policy ()));
+  rtx vm2 = gen_reg_rtx (VMSImode);
+  emit_insn (gen_rtx_SET (vm2, gen_lowpart (VMSImode, vm1)));
+
+  rtx vlx2 = gen_vlx2 (vl, Vmode, VSImode);
+  rtx v2 = gen_reg_rtx (VSImode);
+  emit_insn (gen_vmv_v_x_internal (VSImode, v2, const0_rtx, hi, vlx2,
+                                   rvv_gen_policy ()));
+
+  rtx vd_si = gen_reg_rtx (VSImode);
+  emit_insn (gen_vmerge_vxm_internal (VSImode, vd_si, vm2, const0_rtx, v2, lo,
+                                      vlx2, tail));
+
+  emit_insn (gen_rtx_SET (vd, gen_lowpart (Vmode, vd_si)));
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+bool
+imm32_p (rtx a)
+{
+  if (!CONST_SCALAR_INT_P (a))
+    return false;
+  unsigned HOST_WIDE_INT val = UINTVAL (a);
+  return val <= 0x7FFFFFFFULL || val >= 0xFFFFFFFF80000000ULL;
+}
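+
+/* For example, 0x7fffffff and -1 (all ones) are accepted, while
+   0x80000000 is rejected: it is not the sign-extension of any 32-bit
+   immediate, so its upper half must be materialized separately.  */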
+
+typedef bool imm_p (rtx);
+typedef rtx gen_3 (rtx, rtx, rtx);
+typedef rtx gen_4 (rtx, rtx, rtx, rtx);
+typedef rtx gen_5 (rtx, rtx, rtx, rtx, rtx);
+typedef rtx gen_6 (rtx, rtx, rtx, rtx, rtx, rtx);
+typedef rtx gen_7 (rtx, rtx, rtx, rtx, rtx, rtx, rtx);
+enum GEN_CLASS
+{
+  GEN_VX,
+  GEN_VX_32BIT,
+  GEN_VV
+};
+
+/* Helper functions for handling sew=64 on RV32 system. */
+enum GEN_CLASS
+modify_operands (machine_mode Vmode, machine_mode VSImode,
+                 machine_mode VMSImode, machine_mode VSUBmode, rtx *operands,
+                 bool (*imm5_p) (rtx), int i, bool reverse, unsigned int unspec)
+{
+  if (!TARGET_64BIT && VSUBmode == DImode)
+    {
+      if (imm32_p (operands[i]))
+        {
+          if (!imm5_p (operands[i]))
+            operands[i] = force_reg (SImode, operands[i]);
+          return GEN_VX_32BIT;
+        }
+      else
+        {
+          rtx result = gen_reg_rtx (Vmode);
+          rtx zero = gen_rtx_REG (SImode, X0_REGNUM);
+          rtx tail = rvv_gen_policy ();
+
+          emit_int64_to_vector_32bit (Vmode, VSImode, VMSImode, result,
+                                      operands[i], zero, tail);
+
+          operands[i] = result;
+
+          if (reverse)
+            {
+              rtx b = operands[i - 1];
+              operands[i - 1] = operands[i];
+              operands[i] = b;
+            }
+          return GEN_VV;
+        }
+    }
+  else
+    {
+      if (!imm5_p (operands[i]))
+        operands[i] = force_reg (VSUBmode, operands[i]);
+      return GEN_VX;
+    }
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+bool
+emit_op5_vmv_v_x (machine_mode Vmode, machine_mode VSImode,
+                  machine_mode VMSImode, machine_mode VSUBmode, rtx *operands,
+                  int i)
+{
+  if (!TARGET_64BIT && VSUBmode == DImode)
+    {
+      if (!imm32_p (operands[i]))
+        {
+          rtx vd = operands[1];
+          if (rtx_equal_p (vd, const0_rtx))
+            {
+              vd = operands[0];
+            }
+          emit_int64_to_vector_32bit (Vmode, VSImode, VMSImode, vd, operands[i],
+                                      operands[3], operands[4]);
+
+          emit_insn (gen_rtx_SET (operands[0], vd));
+          return true;
+        }
+    }
+  return false;
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+void
+emit_op5 (unsigned int unspec, machine_mode Vmode, machine_mode VSImode,
+          machine_mode VMSImode, machine_mode VSUBmode, rtx *operands,
+          gen_5 *gen_vx, gen_5 *gen_vx_32bit, gen_5 *gen_vv, imm_p *imm5_p,
+          int i, bool reverse)
+{
+  if (unspec == UNSPEC_VMV)
+    {
+      if (emit_op5_vmv_v_x (Vmode, VSImode, VMSImode, VSUBmode, operands, i))
+        {
+          return;
+        }
+    }
+
+  enum GEN_CLASS gen_class = modify_operands (
+      Vmode, VSImode, VMSImode, VSUBmode, operands, imm5_p, i, reverse, unspec);
+
+  gen_5 *gen = gen_class == GEN_VX   ? gen_vx
+               : gen_class == GEN_VV ? gen_vv
+                                     : gen_vx_32bit;
+
+  emit_insn (
+      (*gen) (operands[0], operands[1], operands[2], operands[3], operands[4]));
 }
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
index 2c242959077..e93852e3e56 100644
--- a/gcc/config/riscv/riscv-vector.h
+++ b/gcc/config/riscv/riscv-vector.h
@@ -20,14 +20,4 @@
 
 #ifndef GCC_RISCV_VECTOR_H
 #define GCC_RISCV_VECTOR_H
-bool riscv_vector_mode_p (machine_mode);
-bool rvv_legitimate_poly_int_p (rtx);
-unsigned int rvv_offset_temporaries (bool, poly_int64);
-vlmul_field_enum rvv_classify_vlmul_field (machine_mode);
-extern unsigned int rvv_parse_vsew_field (unsigned int);
-extern unsigned int rvv_parse_vlmul_field (unsigned int);
-extern bool rvv_parse_vta_field (unsigned int);
-extern bool rvv_parse_vma_field (unsigned int);
-int rvv_regsize (machine_mode);
-opt_machine_mode rvv_get_mask_mode (machine_mode);
 #endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 2b0b76458a7..238c972de09 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -107,6 +107,7 @@
    (VL_REGNUM			66)
    (VTYPE_REGNUM		67)
    (X0_REGNUM			0)
+   (DO_NOT_UPDATE_VL_VTYPE	21)
 ])
 
 (include "predicates.md")
@@ -138,7 +139,13 @@
   (const_string "unknown"))
 
 ;; Main data type used by the insn
-(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF"
+(define_attr "mode" "unknown,none,QI,HI,SI,DI,TI,SF,DF,TF,
+  VNx8QI,VNx4HI,VNx2SI,VNx4HF,VNx2SF,VNx4QI,VNx2HI,VNx2HF,
+  VNx2QI,VNx16QI,VNx8HI,VNx4SI,VNx2DI,VNx8HF,VNx4SF,VNx2DF,
+  VNx32QI,VNx16HI,VNx8SI,VNx4DI,VNx16HF,VNx8SF,VNx4DF,
+  VNx64QI,VNx32HI,VNx16SI,VNx8DI,VNx32HF,VNx16SF,VNx8DF,
+  VNx128QI,VNx64HI,VNx32SI,VNx16DI,VNx64HF,VNx32SF,VNx16DF,
+  VNx2BI,VNx4BI,VNx8BI,VNx16BI,VNx32BI,VNx64BI,VNx128BI"
   (const_string "unknown"))
 
 ;; True if the main data type is twice the size of a word.
@@ -184,11 +191,67 @@
 ;; ghost	an instruction that produces no real code
 ;; bitmanip	bit manipulation instructions
 ;; vsetvl vector configuration setting
+;; vload vector whole register load
+;; vstore vector whole register store
+;; vcopy vector whole register copy
+;; vle vector unit-stride load
+;; vse vector unit-stride store
+;; vlse vector strided load
+;; vsse vector strided store
+;; vluxei vector unordered indexed load
+;; vloxei vector ordered indexed load
+;; vsuxei vector unordered indexed store
+;; vsoxei vector ordered indexed store
+;; vleff vector unit-stride fault-only-first load
+;; varith vector single-width integer and floating-point arithmetic instructions
+;; vadc vector single-width add-with-carry instructions with non-mask dest
+;; vmadc vector single-width add-with-carry instructions with mask dest
+;; vwarith vector widening integer and floating-point arithmetic instructions
+;; vlogical vector integer logical instructions
+;; vshift vector integer shift instructions
+;; vcmp vector integer and floating-point compare
+;; vmul vector integer and floating-point multiply
+;; vmulh vector integer highpart multiply
+;; vdiv vector integer and floating-point divide
+;; vwmul vector integer and floating-point widening multiply
+;; vmadd vector single-width integer and floating-point multiply-add/sub
+;; vwmadd vector widening integer and floating-point multiply-add/sub
+;; vmerge vector element data selection
+;; vmove vector register move
+;; vsarith vector saturating single-width arithmetic instructions
+;; vsmul vector saturating single-width multiply instructions
+;; vscaleshift vector scaling single-width shift instructions
+;; vclip vector saturating clip
+;; vfsqrt vector floating point square root
+;; vfsgnj vector floating-point sign-injection
+;; vfclass vector floating-point classify instructions
+;; vfcvt vector floating point convert
+;; vfwcvt vector widening floating point convert
+;; vfncvt vector narrowing floating point convert
+;; vwcvt vector widening only integer convert
+;; vncvt vector narrowing only integer convert
+;; vreduc vector single-width reduction operations
+;; vwreduc vector widening reduction operations
+;; vmask vector mask operations
+;; vcpop vector mask population count vpopc
+;; vmsetbit vector mask bit manipulation
+;; vid vector element index instruction
+;; vmv_x_s vmv.x.s instruction
+;; vmv_s_x vmv.s.x instruction
+;; vfmv_f_s vfmv.f.s instruction
+;; vfmv_s_f vfmv.s.f instruction
+;; vslide vector slide instructions
+;; vgather vector gather instructions
+;; vcompress vector compress instructions
 (define_attr "type"
   "unknown,branch,jump,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
    fmadd,fdiv,fcmp,fcvt,fsqrt,multi,auipc,sfb_alu,nop,ghost,bitmanip,rotate,
-   vsetvl"
+   vsetvl,vload,vstore,vcopy,vle,vse,vlse,vsse,vluxei,vloxei,vsuxei,vsoxei,vleff,
+   varith,vadc,vmadc,vwarith,vlogical,vshift,vcmp,vmul,vmulh,vdiv,vwmul,vmadd,vwmadd,
+   vmerge,vmove,vsarith,vsmul,vscaleshift,vclip,vfsqrt,vfsgnj,vfclass,vfcvt,vfwcvt,vfncvt,
+   vwcvt,vncvt,vreduc,vwreduc,vmask,vcpop,vmsetbit,viota,vid,vmv_x_s,vmv_s_x,vfmv_f_s,vfmv_s_f,
+   vslide,vgather,vcompress"
   (cond [(eq_attr "got" "load") (const_string "load")
 
 	 ;; If a doubleword move uses these expensive instructions,
diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv
index 9b0da73f3b5..278f3a0ba82 100644
--- a/gcc/config/riscv/t-riscv
+++ b/gcc/config/riscv/t-riscv
@@ -63,6 +63,10 @@ riscv-vector-builtins.o: \
 	$(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \
 		$(srcdir)/config/riscv/riscv-vector-builtins.cc
 
+riscv-insert-vsetvl.o: $(srcdir)/config/riscv/riscv-insert-vsetvl.cc
+	$(COMPILE) $<
+	$(POSTCOMPILE)
+
 PASSES_EXTRA += $(srcdir)/config/riscv/riscv-passes.def
 
 $(common_out_file): $(srcdir)/config/riscv/riscv-cores.def \
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 3e0699de86c..9832d2adaa3 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -21,11 +21,138 @@
 (define_c_enum "unspec" [
   ;; vsetvli.
   UNSPEC_VSETVLI
+  ;; RVV instructions.
+  UNSPEC_RVV
+  ;; vector select
+  UNSPEC_SELECT
+  
+  ;; vle/vse
+  UNSPEC_UNIT_STRIDE_LOAD
+  UNSPEC_UNIT_STRIDE_STORE
+  
+  ;; unspec merge
+  UNSPEC_MERGE
+  
+  UNSPEC_VMV
 ])
 
+;; All vector modes supported.
+(define_mode_iterator V [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI VNx128QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx2DI VNx4DI VNx8DI VNx16DI
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+  
 ;; All integer vector modes supported for RVV.
 (define_mode_iterator VI [
   VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI VNx128QI
   VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
   VNx2SI VNx4SI VNx8SI VNx16SI VNx32SI
-  VNx2DI VNx4DI VNx8DI VNx16DI])
\ No newline at end of file
+  VNx2DI VNx4DI VNx8DI VNx16DI])
+
+;; All vector modes supported for float load/store/alu.
+(define_mode_iterator VF [
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; All vector masking modes.
+(define_mode_iterator VB [
+  VNx2BI VNx4BI VNx8BI VNx16BI
+  VNx32BI VNx64BI VNx128BI])
+
+;; Full vector modes supported.
+(define_mode_iterator VFULL [
+  VNx16QI VNx32QI VNx64QI VNx128QI
+  VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx2DI VNx4DI VNx8DI VNx16DI
+  (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT") (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT") (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; Partial vector modes supported.
+(define_mode_iterator VPARTIAL [
+  VNx2QI VNx4QI VNx8QI
+  VNx2HI VNx4HI
+  VNx2SI
+  (VNx2SF "TARGET_HARD_FLOAT")])
+
+;; All vector modes supported for integer sew = 64.
+(define_mode_iterator V64BITI [VNx2DI VNx4DI VNx8DI VNx16DI])
+
+;; Map a vector mode to its mask mode (mask modes map to themselves).
+(define_mode_attr VM [
+  (VNx2BI "VNx2BI") (VNx4BI "VNx4BI") (VNx8BI "VNx8BI") (VNx16BI "VNx16BI")
+  (VNx32BI "VNx32BI") (VNx64BI "VNx64BI") (VNx128BI "VNx128BI")
+  (VNx2QI "VNx2BI") (VNx4QI "VNx4BI") (VNx8QI "VNx8BI") (VNx16QI "VNx16BI")
+  (VNx32QI "VNx32BI") (VNx64QI "VNx64BI") (VNx128QI "VNx128BI") (VNx2HI "VNx2BI")
+  (VNx4HI "VNx4BI") (VNx8HI "VNx8BI") (VNx16HI "VNx16BI") (VNx32HI "VNx32BI")
+  (VNx64HI "VNx64BI") (VNx2SI "VNx2BI") (VNx4SI "VNx4BI") (VNx8SI "VNx8BI")
+  (VNx16SI "VNx16BI") (VNx32SI "VNx32BI") (VNx2DI "VNx2BI") (VNx4DI "VNx4BI")
+  (VNx8DI "VNx8BI") (VNx16DI "VNx16BI")
+  (VNx2SF "VNx2BI") (VNx4SF "VNx4BI") (VNx8SF "VNx8BI") (VNx16SF "VNx16BI")
+  (VNx32SF "VNx32BI") (VNx2DF "VNx2BI") (VNx4DF "VNx4BI") (VNx8DF "VNx8BI")
+  (VNx16DF "VNx16BI")
+
+;; Map a vector mode to its element mode.
+(define_mode_attr VSUB [
+  (VNx2QI "QI") (VNx4QI "QI") (VNx8QI "QI") (VNx16QI "QI")
+  (VNx32QI "QI") (VNx64QI "QI") (VNx128QI "QI") (VNx2HI "HI")
+  (VNx4HI "HI") (VNx8HI "HI") (VNx16HI "HI") (VNx32HI "HI")
+  (VNx64HI "HI") (VNx2SI "SI") (VNx4SI "SI") (VNx8SI "SI")
+  (VNx16SI "SI") (VNx32SI "SI") (VNx2DI "DI") (VNx4DI "DI")
+  (VNx8DI "DI") (VNx16DI "DI")
+  (VNx2SF "SF") (VNx4SF "SF") (VNx8SF "SF") (VNx16SF "SF")
+  (VNx32SF "SF") (VNx2DF "DF") (VNx4DF "DF") (VNx8DF "DF")
+  (VNx16DF "DF")])
+
+(define_mode_attr VDI_TO_VSI [
+  (VNx2QI "VNx4SI") (VNx4QI "VNx4SI") (VNx8QI "VNx4SI") (VNx16QI "VNx4SI") (VNx32QI "VNx4SI") (VNx64QI "VNx4SI") (VNx128QI "VNx4SI")
+  (VNx2HI "VNx4SI") (VNx4HI "VNx4SI") (VNx8HI "VNx4SI") (VNx16HI "VNx4SI") (VNx32HI "VNx4SI") (VNx64HI "VNx4SI")
+  (VNx2SI "VNx4SI") (VNx4SI "VNx4SI") (VNx8SI "VNx4SI") (VNx16SI "VNx4SI") (VNx32SI "VNx4SI")
+  (VNx2DI "VNx4SI") (VNx4DI "VNx8SI") (VNx8DI "VNx16SI") (VNx16DI "VNx32SI")])
+
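+;; Likewise, but map to the corresponding mask mode.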
+(define_mode_attr VDI_TO_VSI_VM [
+  (VNx2QI "VNx4BI") (VNx4QI "VNx4BI") (VNx8QI "VNx4BI") (VNx16QI "VNx4BI")
+  (VNx32QI "VNx4BI") (VNx64QI "VNx4BI") (VNx128QI "VNx4BI")
+  (VNx2HI "VNx4BI") (VNx4HI "VNx4BI") (VNx8HI "VNx4BI") (VNx16HI "VNx4BI") (VNx32HI "VNx4BI")
+  (VNx64HI "VNx64BI")
+  (VNx2SI "VNx4BI") (VNx4SI "VNx4BI") (VNx8SI "VNx4BI") (VNx16SI "VNx4BI") (VNx32SI "VNx4BI")
+  (VNx2DI "VNx4BI") (VNx4DI "VNx8BI") (VNx8DI "VNx16BI") (VNx16DI "VNx32BI")
+])
+
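+;; Map an integer vector mode to the name of the 64-bit-element mode
+;; used by the SEW=64 on RV32 patterns.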
+(define_mode_attr vi_to_v64biti [
+  (VNx2QI "vnx2di") (VNx4QI "vnx2di") (VNx8QI "vnx2di") (VNx16QI "vnx2di") (VNx32QI "vnx2di") (VNx64QI "vnx2di") (VNx128QI "vnx2di")
+  (VNx2HI "vnx2di") (VNx4HI "vnx2di") (VNx8HI "vnx2di") (VNx16HI "vnx2di") (VNx32HI "vnx2di") (VNx64HI "vnx2di")
+  (VNx2SI "vnx2di") (VNx4SI "vnx2di") (VNx8SI "vnx2di") (VNx16SI "vnx2di") (VNx32SI "vnx2di")
+  (VNx2DI "vnx2di") (VNx4DI "vnx4di") (VNx8DI "vnx8di") (VNx16DI "vnx16di")])
+  
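+;; Int iterator and attributes used to parameterize the vmv.v.x
+;; patterns.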
+(define_int_iterator VMVOP [
+  UNSPEC_VMV
+])
+
+(define_int_attr vxoptab [
+  (UNSPEC_VMV "mv")
+])
+
+(define_int_attr VXOPTAB [
+  (UNSPEC_VMV "UNSPEC_VMV")
+])
+
+(define_int_attr immptab [
+  (UNSPEC_VMV "Ws5")
+])
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 31fdec981b9..4a9c6769812 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -26,6 +26,43 @@
 ;; - RVV intrinsic implementation (Document: https://github.com/riscv/rvv-intrinsic-doc)
 
 (include "vector-iterators.md")
+
+;; =========================================================================
+;; == Vector creation
+;; =========================================================================
+
+;; -------------------------------------------------------------------------
+;; ---- [INT,FP] Vector Creation
+;; -------------------------------------------------------------------------
+;; Includes:
+;; - Duplicate element to a vector
+;; - Initialize from individual elements
+;; -------------------------------------------------------------------------
+
+;; vector integer modes vec_duplicate.
+(define_expand "@vec_duplicate<mode>"
+  [(match_operand:VI 0 "register_operand")
+   (match_operand:<VSUB> 1 "reg_or_simm5_operand")]
+  "TARGET_VECTOR"
+{
+  emit_insn (gen_v_v_x (UNSPEC_VMV, <MODE>mode, 
+      operands[0], const0_rtx, operands[1],
+      gen_rtx_REG (Pmode, X0_REGNUM), rvv_gen_policy ()));
+  DONE;
+})
+
+;; vector floating-point modes vec_duplicate.
+(define_expand "@vec_duplicate<mode>"
+  [(match_operand:VF 0 "register_operand")
+   (match_operand:<VSUB> 1 "register_operand")]
+  "TARGET_VECTOR"
+{
+  emit_insn (gen_vfmv_v_f (<MODE>mode, operands[0], const0_rtx, 
+      operands[1], gen_rtx_REG (Pmode, X0_REGNUM),
+      rvv_gen_policy ()));
+  DONE;
+})
+   
 ;; ===============================================================================
 ;; == Intrinsics
 ;; ===============================================================================
@@ -137,4 +174,200 @@
     return "";
   }
   [(set_attr "type" "vsetvl")
-   (set_attr "mode" "none")])
\ No newline at end of file
+   (set_attr "mode" "none")])
+
+;; -------------------------------------------------------------------------------
+;; ---- 7. Vector Loads and Stores
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 7.4. Vector Unit-Stride Instructions
+;; - 7.5. Vector Strided Instructions
+;; - 7.6. Vector Indexed Instructions
+;; - 7.7. Unit-stride Fault-Only-First Instructions
+;; - 7.8. Vector Load/Store Segment Instructions
+;;  -  7.8.1. Vector Unit-Stride Segment Loads and Stores
+;;  -  7.8.2. Vector Strided Segment Loads and Stores
+;;  -  7.8.3. Vector Indexed Segment Loads and Stores
+;; -------------------------------------------------------------------------------
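+;;
+;; As a sketch of the expected use (intrinsic names per the
+;; rvv-intrinsic-doc; the vsetvli below is emitted by the
+;; insert-vsetvl pass, so its exact shape may differ):
+;;   vint32m1_t vle32_v_i32m1 (const int32_t *base, size_t vl);
+;; with base in a0 and vl in a1 should become:
+;;   vsetvli zero,a1,e32,m1,ta,mu
+;;   vle32.v v8,(a0)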
+
+;; Vector Unit-Stride Loads.
+(define_insn "@vle<mode>"
+  [(set (match_operand:V 0 "register_operand"                 "=vd,vd,  vr,vr")
+    (unspec:V
+      [(unspec:V
+        [(match_operand:<VM> 1 "vector_reg_or_const0_operand" "vm,vm,   J,J")
+          (unspec:V
+            [(match_operand 3 "pmode_register_operand"        "r,r,     r,r")
+            (mem:BLK (scratch))] UNSPEC_UNIT_STRIDE_LOAD)
+         (match_operand:V 2 "vector_reg_or_const0_operand"    "0,J,     0,J")] UNSPEC_SELECT)
+      (match_operand 4 "p_reg_or_const_csr_operand"           "rK,rK,   rK,rK")
+      (match_operand 5 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vle<sew>.v\t%0,(%3),%1.t
+   vle<sew>.v\t%0,(%3),%1.t
+   vle<sew>.v\t%0,(%3)
+   vle<sew>.v\t%0,(%3)"
+  [(set_attr "type" "vle")
+   (set_attr "mode" "<MODE>")])
+
+;; Vector Unit-Stride Stores.
+(define_insn "@vse<mode>"
+  [(set (mem:BLK (scratch))
+    (unspec:BLK
+      [(unspec:V
+        [(match_operand:<VM> 0 "vector_reg_or_const0_operand" "vm,J")
+         (unspec:BLK
+           [(match_operand 1 "pmode_register_operand"         "r,r")
+            (match_operand:V 2 "register_operand"             "vr,vr")
+            (mem:BLK (scratch))] UNSPEC_UNIT_STRIDE_STORE)
+         (match_dup 1)] UNSPEC_SELECT)
+      (match_operand 3 "p_reg_or_const_csr_operand"           "rK,rK")
+      (match_operand 4 "const_int_operand")
+      (reg:SI VL_REGNUM)
+      (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vse<sew>.v\t%2,(%1),%0.t
+   vse<sew>.v\t%2,(%1)"
+  [(set_attr "type" "vse")
+   (set_attr "mode" "<MODE>")])
+
+;; Vector Unit-stride mask Loads.
+(define_insn "@vlm<mode>"
+  [(set (match_operand:VB 0 "register_operand"    "=vr")
+  (unspec:VB
+    [(unspec:VB
+      [(match_operand 1 "pmode_register_operand"  "r")
+       (mem:BLK (scratch))] UNSPEC_UNIT_STRIDE_LOAD)
+    (match_operand 2 "p_reg_or_const_csr_operand" "rK")
+    (match_operand 3 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vlm.v\t%0,(%1)"
+  [(set_attr "type" "vle")
+   (set_attr "mode" "<MODE>")])
+
+;; Vector Unit-stride mask Stores.
+(define_insn "@vsm<mode>"
+  [(set (mem:BLK (scratch))
+  (unspec:BLK
+    [(unspec:BLK
+      [(match_operand 0 "pmode_register_operand"  "r")
+       (match_operand:VB 1 "register_operand"     "vr")
+       (mem:BLK (scratch))] UNSPEC_UNIT_STRIDE_STORE)
+    (match_operand 2 "p_reg_or_const_csr_operand" "rK")
+    (match_operand 3 "const_int_operand")
+    (reg:SI VL_REGNUM)
+    (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vsm.v\t%1,(%0)"
+  [(set_attr "type" "vse")
+   (set_attr "mode" "<MODE>")])
+
+;; vmv.v.x
+(define_expand "@v<vxoptab><mode>_v_x"
+  [(unspec [
+    (match_operand:VI 0 "register_operand")
+    (match_operand:VI 1 "vector_reg_or_const0_operand")
+    (match_operand:<VSUB> 2 "reg_or_const_int_operand")
+    (match_operand 3 "p_reg_or_const_csr_operand")
+    (match_operand 4 "const_int_operand")
+   ] VMVOP)]
+  "TARGET_VECTOR"
+  {
+    emit_op5 (
+      <VXOPTAB>,
+      <MODE>mode, <VDI_TO_VSI>mode, <VDI_TO_VSI_VM>mode,
+      <VSUB>mode,
+      operands,
+      gen_v<vxoptab><mode>_v_x_internal,
+      gen_v<vxoptab><vi_to_v64biti>_v_x_32bit,
+      NULL,
+      satisfies_constraint_<immptab>,
+      2, false
+    );
+    DONE;
+  }
+)
+
+;; Vector-Scalar Integer Move.
+(define_insn "@vmv<mode>_v_x_internal"
+  [(set (match_operand:VI 0 "register_operand"          "=vr,vr,vr,vr")
+  (unspec:VI
+    [(match_operand:VI 1 "vector_reg_or_const0_operand" "0,0,J,J")
+     (vec_duplicate:VI
+      (match_operand:<VSUB> 2 "reg_or_simm5_operand"  "r,Ws5,r,Ws5"))
+     (match_operand 3 "p_reg_or_const_csr_operand"      "rK,rK,rK,rK")
+     (match_operand 4 "const_int_operand")
+     (reg:SI VL_REGNUM)
+     (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+  vmv.v.x\t%0,%2
+  vmv.v.i\t%0,%2
+  vmv.v.x\t%0,%2
+  vmv.v.i\t%0,%2"
+ [(set_attr "type" "vmove")
+  (set_attr "mode" "<MODE>")])
+
+(define_insn "@vmv<mode>_v_x_32bit"
+  [(set (match_operand:V64BITI 0 "register_operand"                     "=vr,vr,vr,vr")
+  (unspec:V64BITI
+    [(match_operand:V64BITI 1 "vector_reg_or_const0_operand"            "0,0,J,J")
+     (vec_duplicate:V64BITI
+        (sign_extend:<VSUB> (match_operand:SI 2 "reg_or_simm5_operand"  "r,Ws5,r,Ws5")))
+     (match_operand:SI 3 "csr_operand"                                  "rK,rK,rK,rK")
+     (match_operand:SI 4 "const_int_operand")
+     (reg:SI VL_REGNUM)
+     (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+  vmv.v.x\t%0,%2
+  vmv.v.i\t%0,%2
+  vmv.v.x\t%0,%2
+  vmv.v.i\t%0,%2"
+ [(set_attr "type" "vmove")
+  (set_attr "mode" "<MODE>")])
+
+;; Vector-Scalar Floating-Point Move.
+(define_insn "@vfmv<mode>_v_f"
+  [(set (match_operand:VF 0 "register_operand"          "=vr,vr")
+  (unspec:VF
+    [(match_operand:VF 1 "vector_reg_or_const0_operand" "0,J")
+     (vec_duplicate:VF
+       (match_operand:<VSUB> 2 "register_operand"       "f,f"))
+     (match_operand 3 "p_reg_or_const_csr_operand"      "rK,rK")
+     (match_operand 4 "const_int_operand")
+     (reg:SI VL_REGNUM)
+     (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "vfmv.v.f\t%0,%2"
+ [(set_attr "type" "vmove")
+  (set_attr "mode" "<MODE>")])
+
+;; Vector-Scalar integer merge.
+(define_insn "@vmerge<mode>_vxm_internal"
+  [(set (match_operand:VI 0 "register_operand"              "=vd,vd,vd,vd")
+    (unspec:VI
+      [(match_operand:VI 2 "vector_reg_or_const0_operand"   "0,0,J,J")
+        (unspec:VI
+          [(match_operand:<VM> 1 "register_operand"         "vm,vm,vm,vm")
+           (match_operand:VI 3 "register_operand"           "vr,vr,vr,vr")
+           (vec_duplicate:VI
+             (match_operand:<VSUB> 4 "reg_or_simm5_operand" "r,Ws5,r,Ws5"))] UNSPEC_MERGE)
+       (match_operand 5 "p_reg_or_const_csr_operand"        "rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+  vmerge.vxm\t%0,%3,%4,%1
+  vmerge.vim\t%0,%3,%4,%1
+  vmerge.vxm\t%0,%3,%4,%1
+  vmerge.vim\t%0,%3,%4,%1"
+ [(set_attr "type" "vmerge")
+  (set_attr "mode" "<MODE>")])
\ No newline at end of file
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 07/21]   Add register spilling support
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (5 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 06/21] Add insert-vsetvl pass juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:49 ` [PATCH 08/21] Add poly manipulation juzhe.zhong
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-protos.h (rvv_expand_const_vector): New function.
        (rvv_expand_const_mask): New function.
        (rvv_const_vec_all_same_in_range_p): New function.
        * config/riscv/riscv-vector.cc (classify_vtype_field): Move earlier
        in file.
        (get_lmulx8): Likewise.
        (force_reg_for_over_uimm): Likewise.
        (gen_vlx2): Likewise.
        (emit_int64_to_vector_32bit): Likewise.
        (rvv_expand_const_vector): New function.
        (rvv_expand_const_mask): New function.
        (rvv_const_vec_all_same_in_range_p): New function.
        * config/riscv/riscv.cc (riscv_const_insns): Add const vector cost.
        * config/riscv/vector-iterators.md: New iterators and attributes.
        * config/riscv/vector.md (mov<mode>): New pattern.
        (*mov<mode>): New pattern.
        (*mov<mode>_reg): New pattern.
        (@vmclr<mode>_m): New pattern.
        (@vmset<mode>_m): New pattern.
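
As a reference, a sketch of what the new const-mask handling is expected
to produce (intrinsic names follow
https://github.com/riscv-non-isa/rvv-intrinsic-doc; the exact vsetvli
and register allocation may differ):

vbool32_t
foo (size_t vl)
{
  return vmclr_m_b32 (vl);
}

ASM (sketch):
foo:
  vsetvli zero,a0,e8,mf4,ta,mu
  vmclr.m v8
  ret

Likewise, a duplicated integer constant in the range [-16, 15] can be
emitted as a single vmv.v.i, which is what the new riscv_const_insns
costs reflect.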
        
---
 gcc/config/riscv/riscv-protos.h      |   3 +
 gcc/config/riscv/riscv-vector.cc     | 349 ++++++++++++++++-----------
 gcc/config/riscv/riscv.cc            |  67 ++++-
 gcc/config/riscv/vector-iterators.md |  24 ++
 gcc/config/riscv/vector.md           | 201 +++++++++++++++
 5 files changed, 502 insertions(+), 142 deletions(-)

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 9a7e120854a..618eb746eaa 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -128,6 +128,9 @@ extern int rvv_regsize (machine_mode);
 extern rtx rvv_gen_policy (unsigned int rvv_policy = 0);
 extern opt_machine_mode rvv_get_mask_mode (machine_mode);
 extern machine_mode rvv_translate_attr_mode (rtx_insn *);
+extern bool rvv_expand_const_vector (rtx, rtx);
+extern bool rvv_expand_const_mask (rtx, rtx);
+extern bool rvv_const_vec_all_same_in_range_p (rtx, HOST_WIDE_INT, HOST_WIDE_INT);
 extern void
 emit_op5 (
   unsigned int unspec,
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index 426490945dd..4b2fe2a8d11 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -71,7 +71,165 @@
 #include "target-def.h"
 
 #include <string.h>
-/* Helper functions for RVV */
+
+/* Internal helper functions for RVV */
+
+/* Return the vtype field for a specific machine mode. */
+static unsigned int
+classify_vtype_field (machine_mode mode)
+{
+  unsigned int vlmul = rvv_classify_vlmul_field (mode);
+  unsigned int vsew = rvv_classify_vsew_field (mode);
+  unsigned int vtype = (vsew << 3) | (vlmul & 0x7) | 0x40;
+  return vtype;
+}
+
+/* lmul = real_lmul * 8
+   guarantee integer
+   e.g.
+     1  => 1/8
+     2  => 1/4
+     4  => 1/2
+     8  => 1
+     16 => 2
+     32 => 4
+     64 => 8
+ */
+static unsigned int
+get_lmulx8 (machine_mode mode)
+{
+  unsigned int vlmul = rvv_classify_vlmul_field (mode);
+  switch (vlmul)
+    {
+      case VLMUL_FIELD_000:
+        return 8;
+      case VLMUL_FIELD_001:
+        return 16;
+      case VLMUL_FIELD_010:
+        return 32;
+      case VLMUL_FIELD_011:
+        return 64;
+      case VLMUL_FIELD_101:
+        return 1;
+      case VLMUL_FIELD_110:
+        return 2;
+      case VLMUL_FIELD_111:
+        return 4;
+      default:
+        gcc_unreachable ();
+    }
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static rtx
+force_reg_for_over_uimm (rtx vl)
+{
+  if (CONST_SCALAR_INT_P (vl) && INTVAL (vl) >= 32)
+    {
+      return force_reg (Pmode, vl);
+    }
+
+  return vl;
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static rtx
+gen_vlx2 (rtx avl, machine_mode Vmode, machine_mode VSImode)
+{
+  if (rtx_equal_p (avl, gen_rtx_REG (Pmode, X0_REGNUM)))
+    {
+      return avl;
+    }
+  rtx i32vl = NULL_RTX;
+  if (CONST_SCALAR_INT_P (avl))
+    {
+      unsigned int vlen_max;
+      unsigned int vlen_min;
+      if (riscv_vector_chunks.is_constant ())
+        {
+          vlen_max = riscv_vector_chunks.to_constant () * 64;
+          vlen_min = vlen_max;
+        }
+      else
+        {
+          /* TODO: vlen_max of up to 65536 will be supported in the future. */
+          vlen_max = RVV_4096;
+          vlen_min = RVV_128;
+        }
+      unsigned int max_vlmax = (vlen_max / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
+      unsigned int min_vlmax = (vlen_min / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
+      
+      unsigned HOST_WIDE_INT avl_int = INTVAL (avl);
+      if (avl_int <= min_vlmax)
+        {
+          i32vl = gen_int_mode (2 * avl_int, SImode);
+        }
+      else if (avl_int >= 2 * max_vlmax)
+        {
+          // Just set i32vl to VLMAX in this situation
+          i32vl = gen_reg_rtx (Pmode);
+          unsigned int vtype = classify_vtype_field (VSImode);
+          emit_insn (gen_vsetvl (Pmode, i32vl, gen_rtx_REG (Pmode, X0_REGNUM), GEN_INT (vtype)));
+        }
+      else
+        {
+          // For AVL between (MinVLMAX, 2 * MaxVLMAX), the actual working vl
+          // is related to the hardware implementation.
+          // So let the following code handle it.
+        }
+    }
+  if (!i32vl)
+    {
+      // Using vsetvli instruction to get actually used length which related to
+      // the hardware implementation
+      rtx i64vl = gen_reg_rtx (Pmode);
+      unsigned int vtype = classify_vtype_field (Vmode);
+      emit_insn (gen_vsetvl (Pmode, i64vl, force_reg (Pmode, avl), GEN_INT (vtype)));
+      // scale 2 for 32-bit length
+      i32vl = gen_reg_rtx (Pmode);
+      emit_insn (gen_rtx_SET (i32vl, gen_rtx_ASHIFT (Pmode, i64vl, const1_rtx)));
+    }
+
+  return force_reg_for_over_uimm (i32vl);
+}
+
+/* Helper functions for handling sew=64 on RV32 system. */
+static void
+emit_int64_to_vector_32bit (machine_mode Vmode, machine_mode VSImode,
+                            machine_mode VMSImode, rtx vd, rtx s, rtx vl,
+                            rtx tail)
+{
+  if (CONST_SCALAR_INT_P (s))
+    {
+      s = force_reg (DImode, s);
+    }
+
+  rtx hi = gen_highpart (SImode, s);
+  rtx lo = gen_lowpart (SImode, s);
+
+  rtx zero = gen_rtx_REG (SImode, X0_REGNUM);
+
+  /* make a "0101..." mask vector */
+  rtx vm1 = gen_reg_rtx (VNx4SImode);
+  emit_insn (gen_vmv_v_x_internal (VNx4SImode, vm1, const0_rtx,
+                                   force_reg (SImode, GEN_INT (0x55555555)),
+                                   zero, rvv_gen_policy ()));
+  rtx vm2 = gen_reg_rtx (VMSImode);
+  emit_insn (gen_rtx_SET (vm2, gen_lowpart (VMSImode, vm1)));
+
+  rtx vlx2 = gen_vlx2 (vl, Vmode, VSImode);
+  rtx v2 = gen_reg_rtx (VSImode);
+  emit_insn (gen_vmv_v_x_internal (VSImode, v2, const0_rtx, hi, vlx2,
+                                   rvv_gen_policy ()));
+
+  rtx vd_si = gen_reg_rtx (VSImode);
+  emit_insn (gen_vmerge_vxm_internal (VSImode, vd_si, vm2, const0_rtx, v2, lo,
+                                      vlx2, tail));
+
+  emit_insn (gen_rtx_SET (vd, gen_lowpart (Vmode, vd_si)));
+}
+
+/* Global RVV implementation.  */
 
 /* Return true if it is a RVV mask mode. */
 bool
@@ -370,159 +528,68 @@ rvv_translate_attr_mode (rtx_insn *insn)
   return VOIDmode;
 }
 
-/* Return the vtype field for a specific machine mode. */
-static unsigned int
-classify_vtype_field (machine_mode mode)
-{
-  unsigned int vlmul = rvv_classify_vlmul_field (mode);
-  unsigned int vsew = rvv_classify_vsew_field (mode);
-  unsigned int vtype = (vsew << 3) | (vlmul & 0x7) | 0x40;
-  return vtype;
-}
-
-/* lmul = real_lmul * 8
-   guarantee integer
-   e.g.
-     1  => 1/8
-     2  => 1/4
-     4  => 1/2
-     8  => 1
-     16 => 2
-     32 => 4
-     64 => 8
- */
-static unsigned int
-get_lmulx8 (machine_mode mode)
-{
-  unsigned int vlmul = rvv_classify_vlmul_field (mode);
-  switch (vlmul)
-    {
-      case VLMUL_FIELD_000:
-        return 8;
-      case VLMUL_FIELD_001:
-        return 16;
-      case VLMUL_FIELD_010:
-        return 32;
-      case VLMUL_FIELD_011:
-        return 64;
-      case VLMUL_FIELD_101:
-        return 1;
-      case VLMUL_FIELD_110:
-        return 2;
-      case VLMUL_FIELD_111:
-        return 4;
-      default:
-        gcc_unreachable ();
-    }
-}
-
-/* Helper functions for handling sew=64 on RV32 system. */
-static rtx
-force_reg_for_over_uimm (rtx vl)
+/* Expand const vector using RVV instructions. */
+bool
+rvv_expand_const_vector (rtx target, rtx src)
 {
-  if (CONST_SCALAR_INT_P (vl) && INTVAL (vl) >= 32)
+  rtx x;
+  machine_mode mode = GET_MODE (target);
+  machine_mode inner_mode = GET_MODE_INNER (mode);
+  
+  /* Case 1: Handle const duplicate vector. */
+  if (const_vec_duplicate_p (src, &x))
     {
-      return force_reg (Pmode, vl);
+      if (FLOAT_MODE_P (mode))
+        x = force_reg (inner_mode, x);
+      emit_insn (gen_vec_duplicate (mode, target, x));
+      return true;
     }
-
-  return vl;
+  /* TODO: For intrinsic support, only const duplicate vectors need to be
+     handled here.  More cases will be supported for auto-vectorization. */
+  return false;
 }
 
-/* Helper functions for handling sew=64 on RV32 system. */
-static rtx
-gen_vlx2 (rtx avl, machine_mode Vmode, machine_mode VSImode)
+/* Expand const mask using RVV instructions. */
+bool
+rvv_expand_const_mask (rtx target, rtx src)
 {
-  if (rtx_equal_p (avl, gen_rtx_REG (Pmode, X0_REGNUM)))
+  rtx ele;
+  rtx zero = gen_rtx_REG (Pmode, X0_REGNUM);
+  machine_mode mode = GET_MODE (target);
+  if (const_vec_duplicate_p (src, &ele))
     {
-      return avl;
-    }
-  rtx i32vl = NULL_RTX;
-  if (CONST_SCALAR_INT_P (avl))
-    {
-      unsigned int vlen_max;
-      unsigned int vlen_min;
-      if (riscv_vector_chunks.is_constant ())
-        {
-          vlen_max = riscv_vector_chunks.to_constant () * 64;
-          vlen_min = vlen_max;
-        }
-      else
-        {
-          /* TODO: vlen_max will be supported as 65536 in the future. */ 
-          vlen_max = RVV_4096;
-          vlen_min = RVV_128;
-        }
-      unsigned int max_vlmax = (vlen_max / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
-      unsigned int min_vlmax = (vlen_min / GET_MODE_UNIT_BITSIZE (Vmode) * get_lmulx8 (Vmode)) / 8;
-      
-      unsigned HOST_WIDE_INT avl_int = INTVAL (avl);
-      if (avl_int <= min_vlmax)
-        {
-          i32vl = gen_int_mode (2 * avl_int, SImode);
-        }
-      else if (avl_int >= 2 * max_vlmax)
-        {
-          // Just set i32vl to VLMAX in this situation
-          i32vl = gen_reg_rtx (Pmode);
-          unsigned int vtype = classify_vtype_field (VSImode);
-          emit_insn (gen_vsetvl (Pmode, i32vl, gen_rtx_REG (Pmode, X0_REGNUM), GEN_INT (vtype)));
-        }
-      else
+      gcc_assert (CONST_SCALAR_INT_P (ele));
+      switch (INTVAL (ele))
         {
-          // For AVL between (MinVLMAX, 2 * MaxVLMAX), the actual working vl
-          // is related to the hardware implementation.
-          // So let the following code handle
+        case 0:
+          emit_insn (gen_vmclr_m (mode, target, zero,
+                                  rvv_gen_policy ()));
+          break;
+        case 1:
+          emit_insn (gen_vmset_m (mode, target, zero,
+                                  rvv_gen_policy ()));
+          break;
+        default:
+          gcc_unreachable ();
         }
+      return true;
     }
-  if (!i32vl)
-    {
-      // Using vsetvli instruction to get actually used length which related to
-      // the hardware implementation
-      rtx i64vl = gen_reg_rtx (Pmode);
-      unsigned int vtype = classify_vtype_field (Vmode);
-      emit_insn (gen_vsetvl (Pmode, i64vl, force_reg (Pmode, avl), GEN_INT (vtype)));
-      // scale 2 for 32-bit length
-      i32vl = gen_reg_rtx (Pmode);
-      emit_insn (gen_rtx_SET (i32vl, gen_rtx_ASHIFT (Pmode, i64vl, const1_rtx)));
-    }
-
-  return force_reg_for_over_uimm (i32vl);
+
+  /* TODO: For intrinsic support, only const all-zeros and const all-ones
+     masks need to be handled here.  More cases will be supported for
+     auto-vectorization. */
+  return false;
 }
 
-/* Helper functions for handling sew=64 on RV32 system. */
-static void
-emit_int64_to_vector_32bit (machine_mode Vmode, machine_mode VSImode,
-                            machine_mode VMSImode, rtx vd, rtx s, rtx vl,
-                            rtx tail)
-{
-  if (CONST_SCALAR_INT_P (s))
-    {
-      s = force_reg (DImode, s);
-    }
-
-  rtx hi = gen_highpart (SImode, s);
-  rtx lo = gen_lowpart (SImode, s);
-
-  rtx zero = gen_rtx_REG (SImode, X0_REGNUM);
-
-  /* make a "0101..." mask vector */
-  rtx vm1 = gen_reg_rtx (VNx4SImode);
-  emit_insn (gen_vmv_v_x_internal (VNx4SImode, vm1, const0_rtx,
-                                   force_reg (SImode, GEN_INT (0x55555555)),
-                                   zero, rvv_gen_policy ()));
-  rtx vm2 = gen_reg_rtx (VMSImode);
-  emit_insn (gen_rtx_SET (vm2, gen_lowpart (VMSImode, vm1)));
+/* Return true if X is a const_vector whose elements are all equal and
+   whose value is in the range [MINVAL, MAXVAL].  */
 
-  rtx vlx2 = gen_vlx2 (vl, Vmode, VSImode);
-  rtx v2 = gen_reg_rtx (VSImode);
-  emit_insn (gen_vmv_v_x_internal (VSImode, v2, const0_rtx, hi, vlx2,
-                                   rvv_gen_policy ()));
-
-  rtx vd_si = gen_reg_rtx (VSImode);
-  emit_insn (gen_vmerge_vxm_internal (VSImode, vd_si, vm2, const0_rtx, v2, lo,
-                                      vlx2, tail));
-
-  emit_insn (gen_rtx_SET (vd, gen_lowpart (Vmode, vd_si)));
+bool
+rvv_const_vec_all_same_in_range_p (rtx x, HOST_WIDE_INT minval,
+                                   HOST_WIDE_INT maxval)
+{
+  rtx elt;
+  return (const_vec_duplicate_p (x, &elt) && CONST_INT_P (elt)
+          && IN_RANGE (INTVAL (elt), minval, maxval));
 }
 
 /* Helper functions for handling sew=64 on RV32 system. */
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 8c78e726a19..fc27dc957dc 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1097,9 +1097,74 @@ riscv_const_insns (rtx x)
       }
 
     case CONST_DOUBLE:
-    case CONST_VECTOR:
       /* We can use x0 to load floating-point zero.  */
       return x == CONST0_RTX (GET_MODE (x)) ? 1 : 0;
+    case CONST_VECTOR:
+      {
+	machine_mode mode = GET_MODE (x);
+	/* For a mode which is not an RVV mode, use the
+	   default handling. */
+	if (!rvv_mode_p (mode))
+	  return x == CONST0_RTX (GET_MODE (x)) ? 1 : 0;
+	unsigned int factor = 0;
+	if (GET_MODE_CLASS (GET_MODE (x)) == MODE_VECTOR_BOOL)
+	  {
+	    /* In RVV, we can use vmclr.m/vmset.m to generate an
+	       all-0s/all-1s bool vector; otherwise we can only use
+	       load instructions. */
+	    if (x == CONST0_RTX (GET_MODE (x))
+	      || x == CONSTM1_RTX (GET_MODE (x)))
+	      return 1;
+	    else
+	      return 0;
+	  }
+	else if (FLOAT_MODE_P (GET_MODE (x)))
+	  {
+	    /* In RVV, a floating-point value must first be loaded
+	       into a floating-point register and then
+	       duplicated. */
+	    factor = 3;
+	  }
+	else
+	  {
+	    rtx elt;
+	    if (!const_vec_duplicate_p (x, &elt))
+	      {
+	        rtx base, step;
+	        if (const_vec_series_p (x, &base, &step))
+	          {
+	            /* For the const vector {0, 1, 2, ...},
+	               a single vid.v instruction generates
+	               the vector. */
+	            if (INTVAL (step) == 1
+	              && INTVAL (base) == 0)
+	              factor = 1;
+	            /* We need a vid + li + vmul.vx instruction. */
+	            else if (INTVAL (base) == 0)
+	              factor = 2 + riscv_integer_cost (INTVAL (step));
+	            /* We need a vid + (li + vadd.vx)/vadd.vi instruction. */
+	            else if (INTVAL (step) == 1)
+	              factor = IN_RANGE (INTVAL (base), -16, 15) ? 2
+	                  : 2 + riscv_integer_cost (INTVAL (base));
+	            /* We need a vid + (li + vadd.vx)/vadd.vi + li + vmul.vx instruction. */
+	            else
+	              factor = IN_RANGE (INTVAL (base), -16, 15) ? 4
+	                  : 4 + riscv_integer_cost (INTVAL (base));
+	          }
+	        else
+	          factor = 0;
+	      }
+	    else
+	      {
+	        /* Use vmv.v.i. */
+	        if (rvv_const_vec_all_same_in_range_p (x, -16, 15))
+	          factor = 1;
+	        /* Use li + vmv.v.x. */
+	        else
+	          factor = 1 + riscv_integer_cost (INTVAL (elt));
+	      }
+	  }
+      }
 
     case CONST:
       /* See if we can refer to X directly.  */
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 9832d2adaa3..e01305ef3fc 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -140,6 +140,30 @@
   (VNx2HI "vnx2di") (VNx4HI "vnx2di") (VNx8HI "vnx2di") (VNx16HI "vnx2di") (VNx32HI "vnx2di") (VNx64HI "vnx2di")
   (VNx2SI "vnx2di") (VNx4SI "vnx2di") (VNx8SI "vnx2di") (VNx16SI "vnx2di") (VNx32SI "vnx2di")
   (VNx2DI "vnx2di") (VNx4DI "vnx4di") (VNx8DI "vnx8di") (VNx16DI "vnx16di")])
+
+;; Map a vector mode to SEW
+(define_mode_attr sew [
+  (VNx2QI "8") (VNx4QI "8") (VNx8QI "8") (VNx16QI "8")
+  (VNx32QI "8") (VNx64QI "8") (VNx128QI "8") (VNx2HI "16")
+  (VNx4HI "16") (VNx8HI "16") (VNx16HI "16") (VNx32HI "16")
+  (VNx64HI "16") (VNx2SI "32") (VNx4SI "32") (VNx8SI "32")
+  (VNx16SI "32") (VNx32SI "32") (VNx2DI "64") (VNx4DI "64")
+  (VNx8DI "64") (VNx16DI "64") 
+  (VNx2SF "32") (VNx4SF "32") (VNx8SF "32") (VNx16SF "32")
+  (VNx32SF "32") (VNx2DF "64") (VNx4DF "64") (VNx8DF "64")
+  (VNx16DF "64")])
+
+;; Map a vector mode to its LMUL.
+(define_mode_attr lmul [
+  (VNx2QI "1") (VNx4QI "1") (VNx8QI "1") (VNx16QI "1")
+  (VNx32QI "2") (VNx64QI "4") (VNx128QI "8") (VNx2HI "1")
+  (VNx4HI "1") (VNx8HI "1") (VNx16HI "2") (VNx32HI "4")
+  (VNx64HI "8") (VNx2SI "1") (VNx4SI "1") (VNx8SI "2")
+  (VNx16SI "4") (VNx32SI "8") (VNx2DI "1") (VNx4DI "2")
+  (VNx8DI "4") (VNx16DI "8")
+  (VNx2SF "1") (VNx4SF "1") (VNx8SF "2") (VNx16SF "4")
+  (VNx32SF "8") (VNx2DF "1") (VNx4DF "2") (VNx8DF "4")
+  (VNx16DF "8")])
   
 (define_int_iterator VMVOP [
   UNSPEC_VMV
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 4a9c6769812..1731d969372 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -62,6 +62,179 @@
       rvv_gen_policy ()));
   DONE;
 })
+
+;; =========================================================================
+;; == Vector spilling
+;; =========================================================================
+
+;; -------------------------------------------------------------------------
+;; ---- Moves Operations
+;; -------------------------------------------------------------------------
+;; Includes:
+;; - Full vector load/store/move
+;; - Partial vector load/store/move
+;; - Misaligned vector move (all modes)
+;; -------------------------------------------------------------------------
+
+;; Move Pattern for all vector modes.
+(define_expand "mov<mode>"
+  [(set (match_operand:VFULL 0 "reg_or_mem_operand")
+        (match_operand:VFULL 1 "vector_move_operand"))]
+  "TARGET_VECTOR"
+{
+  /* Need to force register if mem <- !reg.  */
+  if (MEM_P (operands[0]) && !REG_P (operands[1]))
+    operands[1] = force_reg (<MODE>mode, operands[1]);
+
+  if (GET_CODE (operands[1]) == CONST_VECTOR
+      && rvv_expand_const_vector (operands[0], operands[1]))
+    DONE;
+})
+
+;; Full vector load/store/move.
+(define_insn "*mov<mode>"
+  [(set (match_operand:VFULL 0 "reg_or_mem_operand" "=vr,m,vr")
+        (match_operand:VFULL 1 "reg_or_mem_operand" "m,vr,vr"))]
+  "TARGET_VECTOR"
+  "@
+   vl<lmul>re<sew>.v\t%0,%1
+   vs<lmul>r.v\t%1,%0
+   vmv<lmul>r.v\t%0,%1"
+  [(set_attr "type" "vload,vstore,vcopy")
+   (set_attr "mode" "<MODE>")])
+
+(define_expand "mov<mode>"
+  [(parallel [(set (match_operand:VPARTIAL 0 "reg_or_mem_operand")
+                   (match_operand:VPARTIAL 1 "vector_move_operand"))
+              (clobber (scratch:SI))
+              (clobber (reg:SI VL_REGNUM))
+              (clobber (reg:SI VTYPE_REGNUM))])]
+  "TARGET_VECTOR"
+{
+  /* Need to force register if mem <- !reg.  */
+  if (MEM_P (operands[0]) && !REG_P (operands[1]))
+    operands[1] = force_reg (<MODE>mode, operands[1]);
+
+  if (GET_CODE (operands[1]) == CONST_VECTOR
+      && rvv_expand_const_vector (operands[0], operands[1]))
+    DONE;
+})
+
+;; Partial vector load/store/move.
+(define_insn_and_split "*mov<mode>"
+  [(set (match_operand:VPARTIAL 0 "reg_or_mem_operand" "=vr,m,vr")
+        (match_operand:VPARTIAL 1 "reg_or_mem_operand" "m,vr,vr"))
+        (clobber (match_scratch:SI 2 "=&r,&r,X"))
+        (clobber (reg:SI VL_REGNUM))
+        (clobber (reg:SI VTYPE_REGNUM))]
+  "TARGET_VECTOR"
+  "@
+   vle<sew>.v\t%0,%1
+   vse<sew>.v\t%1,%0
+   #"
+  "&& (!reload_completed || (REG_P (operands[0])
+   && REG_P (operands[1])))"
+  [(const_int 0)]
+  {
+    /* Need to force register if mem <- !reg.  */
+    if (MEM_P (operands[0]) && !REG_P (operands[1]))
+      operands[1] = force_reg (<MODE>mode, operands[1]);
+      
+    if (MEM_P (operands[0]))
+      {
+        emit_insn (gen_vse (<MODE>mode, const0_rtx, XEXP (operands[0], 0),
+                            operands[1], gen_rtx_REG (Pmode, X0_REGNUM),
+                            rvv_gen_policy ()));
+        DONE;
+      }
+    if (MEM_P (operands[1]))
+      {
+        emit_insn (gen_vle (<MODE>mode, operands[0], const0_rtx, const0_rtx,
+                            XEXP (operands[1], 0), gen_rtx_REG (Pmode, X0_REGNUM),
+                            rvv_gen_policy ()));
+        DONE;
+      }
+
+    emit_insn (gen_rtx_SET (operands[0], operands[1]));
+    DONE;
+  }
+  [(set_attr "type" "vle,vse,vcopy")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*mov<mode>_reg"
+  [(set (match_operand:VPARTIAL 0 "register_operand" "=vr")
+        (match_operand:VPARTIAL 1 "register_operand" "vr"))]
+  "TARGET_VECTOR"
+  "vmv1r.v\t%0,%1"
+  [(set_attr "type" "vcopy")
+   (set_attr "mode" "<MODE>")])
+
+;; Move pattern for mask modes.
+(define_expand "mov<mode>"
+  [(parallel [(set (match_operand:VB 0 "reg_or_mem_operand")
+                   (match_operand:VB 1 "vector_move_operand"))
+	      (clobber (scratch:SI))
+        (clobber (reg:SI VL_REGNUM))
+        (clobber (reg:SI VTYPE_REGNUM))])]
+  "TARGET_VECTOR"
+{
+  /* Need to force register if mem <- !reg.  */
+  if (MEM_P (operands[0]) && !REG_P (operands[1]))
+    operands[1] = force_reg (<MODE>mode, operands[1]);
+  
+  if (GET_CODE (operands[1]) == CONST_VECTOR 
+    && rvv_expand_const_mask (operands[0], operands[1]))
+    DONE;
+})
+
+;; mask load/store/move.
+(define_insn_and_split "*mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr,m,vr")
+        (match_operand:VB 1 "reg_or_mem_operand" "m,vr,vr"))
+        (clobber (match_scratch:SI 2 "=&r,&r,X"))
+        (clobber (reg:SI VL_REGNUM))
+        (clobber (reg:SI VTYPE_REGNUM))]
+  "TARGET_VECTOR"
+  "@
+   vlm.v\t%0,%1
+   vsm.v\t%1,%0
+   #"
+  "&& (!reload_completed || (REG_P (operands[0])
+   && REG_P (operands[1])))"
+  [(const_int 0)]
+  {
+    /* Need to force register if mem <- !reg.  */
+    if (MEM_P (operands[0]) && !REG_P (operands[1]))
+      operands[1] = force_reg (<MODE>mode, operands[1]);
+      
+    if (MEM_P (operands[0]))
+      {
+        emit_insn (gen_vsm (<MODE>mode, XEXP (operands[0], 0), operands[1],
+                            gen_rtx_REG (Pmode, X0_REGNUM),
+                            rvv_gen_policy ()));
+        DONE;
+      }
+    if (MEM_P (operands[1]))
+      {
+        emit_insn (gen_vlm (<MODE>mode, operands[0], XEXP (operands[1], 0),
+                            gen_rtx_REG (Pmode, X0_REGNUM),
+                            rvv_gen_policy ()));
+        DONE;
+      }
+
+    emit_insn (gen_rtx_SET (operands[0], operands[1]));
+    DONE;
+  }
+  [(set_attr "type" "vle,vse,vcopy")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*mov<mode>_reg"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+        (match_operand:VB 1 "register_operand" "vr"))]
+  "TARGET_VECTOR"
+  "vmv1r.v\t%0,%1"
+  [(set_attr "type" "vcopy")
+   (set_attr "mode" "<MODE>")])   
    
 ;; ===============================================================================
 ;; == Intrinsics
@@ -370,4 +543,32 @@
   vmerge.vxm\t%0,%3,%4,%1
   vmerge.vim\t%0,%3,%4,%1"
  [(set_attr "type" "vmerge")
+  (set_attr "mode" "<MODE>")])
+
+;; vmclr.m vd -> vmxor.mm vd,vd,vd # Clear mask register
+(define_insn "@vmclr<mode>_m"
+  [(set (match_operand:VB 0 "register_operand"        "=vr")
+    (unspec:VB
+      [(vec_duplicate:VB (const_int 0))
+       (match_operand 1 "p_reg_or_const_csr_operand"  "rK")
+       (match_operand 2 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "vmclr.m\t%0"
+ [(set_attr "type" "vmask")
+  (set_attr "mode" "<MODE>")])
+
+;; vmset.m vd -> vmxnor.mm vd,vd,vd # Set mask register
+(define_insn "@vmset<mode>_m"
+  [(set (match_operand:VB 0 "register_operand"        "=vr")
+    (unspec:VB
+      [(vec_duplicate:VB (const_int 1))
+       (match_operand 1 "p_reg_or_const_csr_operand"  "rK")
+       (match_operand 2 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "vmset.m\t%0"
+ [(set_attr "type" "vmask")
   (set_attr "mode" "<MODE>")])
\ No newline at end of file
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 08/21]   Add poly manipulation
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (6 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 07/21] Add register spilling support juzhe.zhong
@ 2022-05-31  8:49 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 09/21] Add misc function intrinsic support juzhe.zhong
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:49 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-protos.h (riscv_add_offset): Make riscv_add_offset a global function.
        * config/riscv/riscv-vector.cc (rvv_report_required): New function.
        (expand_quotient): New function.
        (rvv_expand_poly_move): New function.
        * config/riscv/riscv-vector.h (rvv_report_required): New function.
        (rvv_expand_poly_move): New function.
        * config/riscv/riscv.cc (riscv_const_insns): Fix missing return value.
        (riscv_split_symbol): Add symbol_ref with poly_int support.
        (riscv_legitimize_const_move): Add const poly_int move support.
        (riscv_legitimize_move): Add const poly_int move support.
        (riscv_hard_regno_mode_ok): Add VL_REGNUM and VTYPE_REGNUM register allocation.
        (riscv_conditional_register_usage): Fix RVV registers.
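
To illustrate the const_poly_int expansion, moving the poly value
[16, 16] (i.e. 2 x VLENB, with 64-bit chunks) into a GPR is expected to
become something like (a sketch; register numbers are arbitrary):

  csrr  a5,vlenb
  slli  a0,a5,1

A value such as [20, 16] additionally emits an addi for the constant
remainder, since constant = offset - factor = 4.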
        
---
 gcc/config/riscv/riscv-protos.h  |   1 +
 gcc/config/riscv/riscv-vector.cc | 254 +++++++++++++++++++++++++++++++
 gcc/config/riscv/riscv-vector.h  |   2 +
 gcc/config/riscv/riscv.cc        |  46 +++++-
 4 files changed, 299 insertions(+), 4 deletions(-)

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 618eb746eaa..2d63fe76930 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -74,6 +74,7 @@ extern bool riscv_expand_block_move (rtx, rtx, rtx);
 extern bool riscv_store_data_bypass_p (rtx_insn *, rtx_insn *);
 extern rtx riscv_gen_gpr_save_insn (struct riscv_frame_info *);
 extern bool riscv_gpr_save_operation_p (rtx);
+extern rtx riscv_add_offset (rtx, rtx, HOST_WIDE_INT);
 
 /* Routines implemented in riscv-c.cc.  */
 void riscv_cpu_cpp_builtins (cpp_reader *);
diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index 4b2fe2a8d11..d09fc1b8e49 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -592,6 +592,260 @@ rvv_const_vec_all_same_in_range_p (rtx x, HOST_WIDE_INT minval,
           IN_RANGE (INTVAL (elt), minval, maxval));
 }
 
+/* Report when we try to do something that requires vector when vector is disabled.
+   This is an error of last resort and isn't very high-quality.  It usually
+   involves attempts to measure the vector length in some way.  */
+void
+rvv_report_required (void)
+{
+  static bool reported_p = false;
+
+  /* Avoid reporting a slew of messages for a single oversight.  */
+  if (reported_p)
+    return;
+
+  error ("this operation requires the RVV ISA extension");
+  inform (input_location, "you can enable RVV using the command-line"
+          " option %<-march%>, or by using the %<target%>"
+          " attribute or pragma");
+  reported_p = true;
+}
+
+/* Note: the clobber register holds vlenb, or 1/2, 1/4 or 1/8 of vlenb.  */
+/* Expand a move of QUOTIENT multiples of that value into DEST.  */
+static void
+expand_quotient (int quotient, machine_mode mode, rtx clobber_vlenb, rtx dest)
+{
+  if (quotient == 0)
+    {
+      riscv_emit_move (dest, GEN_INT (0));
+      return;
+    }
+
+  bool is_neg = quotient < 0;
+  quotient = abs (quotient);
+  int log2 = exact_log2 (quotient);
+  int vlenb = BYTES_PER_RISCV_VECTOR.coeffs[1];
+
+  if (GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+    emit_insn (gen_rtx_SET (clobber_vlenb, gen_int_mode (poly_int64 (vlenb, vlenb), mode)));
+  else
+    {
+      riscv_emit_move (gen_highpart (Pmode, clobber_vlenb), GEN_INT (0));
+      emit_insn (gen_rtx_SET (gen_lowpart (Pmode, clobber_vlenb), gen_int_mode (poly_int64 (vlenb, vlenb), Pmode)));
+    }
+
+  if (log2 == 0)
+    {
+      if (is_neg)
+        {
+          if (GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+            emit_insn (gen_rtx_SET (dest, gen_rtx_NEG (mode, clobber_vlenb)));
+          else
+            {
+              /* We should use SImode to simulate DImode negation.  The
+                 prologue and epilogue cannot take this path. */
+              gcc_assert (can_create_pseudo_p ());
+              rtx reg = gen_reg_rtx (Pmode);
+              riscv_emit_move (dest, clobber_vlenb);
+              emit_insn (gen_rtx_SET (reg,
+                  gen_rtx_NE (Pmode, gen_lowpart (Pmode, dest), const0_rtx)));
+              emit_insn (gen_rtx_SET (gen_highpart (Pmode, dest),
+                  gen_rtx_NEG (Pmode, gen_highpart (Pmode, dest))));
+              emit_insn (gen_rtx_SET (gen_lowpart (Pmode, dest),
+                  gen_rtx_NEG (Pmode, gen_lowpart (Pmode, dest))));
+              emit_insn (gen_rtx_SET (gen_highpart (Pmode, dest),
+                  gen_rtx_MINUS (Pmode, gen_highpart (Pmode, dest), reg)));
+            }
+        }
+      else
+        riscv_emit_move (dest, clobber_vlenb);
+    }
+  else if (log2 != -1
+    && GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+    {
+      gcc_assert (IN_RANGE (log2, 0, 31));
+
+      if (is_neg)
+        {
+          emit_insn (gen_rtx_SET (dest, gen_rtx_NEG (mode, clobber_vlenb)));
+          emit_insn (gen_rtx_SET (dest, gen_rtx_ASHIFT (mode, dest, GEN_INT (log2))));
+        }
+      else
+        emit_insn (gen_rtx_SET (dest, gen_rtx_ASHIFT (mode, clobber_vlenb, GEN_INT (log2))));
+    }
+  else if (exact_log2 (quotient + 1) != -1
+    && GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+    {
+      gcc_assert (IN_RANGE (exact_log2 (quotient + 1), 0, 31));
+      emit_insn (gen_rtx_SET (
+                   dest,
+                   gen_rtx_ASHIFT (mode, clobber_vlenb, GEN_INT (exact_log2 (quotient + 1)))));
+
+      if (is_neg)
+        emit_insn (gen_rtx_SET (dest, gen_rtx_MINUS (mode, clobber_vlenb, dest)));
+      else
+        emit_insn (gen_rtx_SET (dest, gen_rtx_MINUS (mode, dest, clobber_vlenb)));
+    }
+  else if (exact_log2 (quotient - 1) != -1
+    && GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+    {
+      gcc_assert (IN_RANGE (exact_log2 (quotient - 1), 0, 31));
+      emit_insn (gen_rtx_SET (
+                   dest, gen_rtx_ASHIFT (mode, clobber_vlenb,
+                                         GEN_INT (exact_log2 (quotient - 1)))));
+
+      if (is_neg)
+        {
+          emit_insn (gen_rtx_SET (dest, gen_rtx_NEG (mode, dest)));
+          emit_insn (gen_rtx_SET (dest, gen_rtx_MINUS (mode, dest, clobber_vlenb)));
+        }
+      else
+        emit_insn (gen_rtx_SET (dest, gen_rtx_PLUS (mode, dest, clobber_vlenb)));
+    }
+  else
+    {
+      gcc_assert (TARGET_MUL
+                  && "M-extension must be enabled to calculate the poly_int "
+                  "size/offset.");
+
+      if (is_neg)
+        riscv_emit_move (dest, GEN_INT (-quotient));
+      else
+        riscv_emit_move (dest, GEN_INT (quotient));
+
+      if (GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+        emit_insn (gen_rtx_SET (dest, gen_rtx_MULT (mode, dest, clobber_vlenb)));
+      else
+        {
+          /* We should use SImode to simulate DImode multiplication.  The
+             prologue and epilogue cannot take this path. */
+          gcc_assert (can_create_pseudo_p ());
+          rtx reg = gen_reg_rtx (Pmode);
+          emit_insn (gen_umulsi3_highpart (reg, gen_lowpart (Pmode, dest),
+              gen_lowpart (Pmode, clobber_vlenb)));
+          emit_insn (gen_rtx_SET (gen_highpart (Pmode, clobber_vlenb),
+              gen_rtx_MULT (Pmode, gen_highpart (Pmode, clobber_vlenb),
+                  gen_lowpart (Pmode, dest))));
+          emit_insn (gen_rtx_SET (gen_highpart (Pmode, dest),
+              gen_rtx_MULT (Pmode, gen_highpart (Pmode, dest),
+                  gen_lowpart (Pmode, clobber_vlenb))));
+          emit_insn (gen_rtx_SET (gen_lowpart (Pmode, dest),
+              gen_rtx_MULT (Pmode, gen_lowpart (Pmode, dest),
+                  gen_lowpart (Pmode, clobber_vlenb))));
+          emit_insn (gen_rtx_SET (gen_highpart (Pmode, dest),
+              gen_rtx_PLUS (Pmode, gen_highpart (Pmode, dest),
+                  gen_highpart (Pmode, clobber_vlenb))));
+          emit_insn (gen_rtx_SET (gen_highpart (Pmode, dest),
+              gen_rtx_PLUS (Pmode, gen_highpart (Pmode, dest), reg)));
+        }
+    }
+}
+
+/* Analyze SRC and emit a move sequence for a const_poly_int value.  */
+
+void
+rvv_expand_poly_move (machine_mode mode, rtx dest, rtx clobber, rtx src)
+{
+  poly_int64 value = rtx_to_poly_int64 (src);
+  int offset = value.coeffs[0];
+  int factor = value.coeffs[1];
+  int vlenb = BYTES_PER_RISCV_VECTOR.coeffs[1];
+  int div_factor = 0;
+
+  if (BYTES_PER_RISCV_VECTOR.is_constant ())
+    {
+      gcc_assert (value.is_constant ());
+      riscv_emit_move (dest, GEN_INT (value.to_constant ()));
+      return;
+    }
+  else if ((factor % vlenb) == 0)
+    expand_quotient (factor / vlenb, mode, clobber, dest);
+  else if ((factor % (vlenb / 2)) == 0)
+    {
+      expand_quotient (factor / (vlenb / 2), mode, clobber, dest);
+      div_factor = 2;
+    }
+  else if ((factor % (vlenb / 4)) == 0)
+    {
+      expand_quotient (factor / (vlenb / 4), mode, clobber, dest);
+      div_factor = 4;
+    }
+  else if ((factor % (vlenb / 8)) == 0)
+    {
+      expand_quotient (factor / (vlenb / 8), mode, clobber, dest);
+      div_factor = 8;
+    }
+  else if ((factor % (vlenb / 16)) == 0)
+    {
+      expand_quotient (factor / (vlenb / 16), mode, clobber, dest);
+      div_factor = 16;
+    }
+  else
+    gcc_unreachable ();
+
+  if (div_factor != 0)
+    {
+      if (GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+        emit_insn (gen_rtx_SET (
+            dest,
+            gen_rtx_ASHIFTRT (mode, dest, GEN_INT (exact_log2 (div_factor)))));
+      else
+        {
+          /* We should use SImode to simulate DImode shift.  The
+             prologue and epilogue cannot take this path. */
+          gcc_assert (can_create_pseudo_p ());
+          rtx reg = gen_reg_rtx (Pmode);
+          emit_insn (gen_rtx_SET (
+              reg, gen_rtx_ASHIFT (Pmode, gen_highpart (Pmode, dest),
+                                   GEN_INT (GET_MODE_BITSIZE (Pmode) -
+                                            exact_log2 (div_factor)))));
+          emit_insn (gen_rtx_SET (
+              gen_lowpart (Pmode, dest),
+              gen_rtx_LSHIFTRT (Pmode, gen_lowpart (Pmode, dest),
+                                GEN_INT (exact_log2 (div_factor)))));
+          emit_insn (gen_rtx_SET (
+              gen_lowpart (Pmode, dest),
+              gen_rtx_IOR (Pmode, reg, gen_lowpart (Pmode, dest))));
+          emit_insn (gen_rtx_SET (
+              gen_highpart (Pmode, dest),
+              gen_rtx_ASHIFTRT (Pmode, gen_highpart (Pmode, dest),
+                                GEN_INT (exact_log2 (div_factor)))));
+        }
+    }
+
+  HOST_WIDE_INT constant = offset - factor;
+
+  if (constant == 0)
+    return;
+  else if (SMALL_OPERAND (constant))
+    {
+      if (GET_MODE_SIZE (mode).to_constant () <= GET_MODE_SIZE (Pmode))
+        emit_insn (
+            gen_rtx_SET (dest, gen_rtx_PLUS (mode, dest, GEN_INT (constant))));
+      else
+        {
+          /* We should use SImode to simulate DImode addition.  The
+             prologue and epilogue cannot take this path. */
+          gcc_assert (can_create_pseudo_p ());
+          rtx reg = gen_reg_rtx (Pmode);
+          emit_insn (
+              gen_rtx_SET (reg, gen_rtx_PLUS (Pmode, gen_lowpart (Pmode, dest),
+                                              GEN_INT (constant))));
+          emit_insn (gen_rtx_SET (
+              gen_lowpart (Pmode, dest),
+              gen_rtx_LTU (Pmode, reg, gen_lowpart (Pmode, dest))));
+          emit_insn (
+              gen_rtx_SET (gen_highpart (Pmode, dest),
+                           gen_rtx_PLUS (Pmode, gen_lowpart (Pmode, dest),
+                                         gen_highpart (Pmode, dest))));
+          riscv_emit_move (gen_lowpart (Pmode, dest), reg);
+        }
+    }
+  else
+    emit_insn (gen_rtx_SET (dest, riscv_add_offset (clobber, dest, constant)));
+}
+
 /* Helper functions for handling sew=64 on RV32 system. */
 bool
 imm32_p (rtx a)
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
index e93852e3e56..b70cf676e26 100644
--- a/gcc/config/riscv/riscv-vector.h
+++ b/gcc/config/riscv/riscv-vector.h
@@ -20,4 +20,6 @@
 
 #ifndef GCC_RISCV_VECTOR_H
 #define GCC_RISCV_VECTOR_H
+void rvv_report_required (void);
+void rvv_expand_poly_move (machine_mode, rtx, rtx, rtx);
 #endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index fc27dc957dc..7a1f19b32ee 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1164,6 +1164,7 @@ riscv_const_insns (rtx x)
 	          factor = 1 + riscv_integer_cost (INTVAL (elt));
 	      }
 	  }
+	return factor;
       }
 
     case CONST:
@@ -1467,7 +1468,7 @@ riscv_split_symbol (rtx temp, rtx addr, machine_mode mode, rtx *low_out,
    riscv_force_temporary; it is only needed when OFFSET is not a
    SMALL_OPERAND.  */
 
-static rtx
+rtx
 riscv_add_offset (rtx temp, rtx reg, HOST_WIDE_INT offset)
 {
   if (!SMALL_OPERAND (offset))
@@ -1723,7 +1724,20 @@ riscv_legitimize_const_move (machine_mode mode, rtx dest, rtx src)
       riscv_emit_move (dest, riscv_add_offset (NULL, base, INTVAL (offset)));
       return;
     }
-
+    
+  /* Handle (const:SI (plus:SI (symbol_ref:SI)
+        (const_poly_int:SI [16, 16]))) */
+  if (GET_CODE (src) == CONST && GET_CODE (XEXP (src, 0)) == PLUS 
+      && CONST_POLY_INT_P (XEXP (XEXP (src, 0), 1)))
+    {
+      rtx reg = gen_reg_rtx (mode);
+      rtx clobber = gen_reg_rtx (mode);
+      riscv_emit_move (dest, XEXP (XEXP (src, 0), 0));
+      rvv_expand_poly_move (mode, reg, clobber, XEXP (XEXP (src, 0), 1));
+      emit_insn (gen_rtx_SET (dest, gen_rtx_PLUS (mode, dest, reg)));
+      return;
+    }
+    
   src = force_const_mem (mode, src);
 
   /* When using explicit relocs, constant pool references are sometimes
@@ -1738,6 +1752,28 @@ riscv_legitimize_const_move (machine_mode mode, rtx dest, rtx src)
 bool
 riscv_legitimize_move (machine_mode mode, rtx dest, rtx src)
 {
+  scalar_int_mode int_mode;
+  if (CONST_POLY_INT_P (src)
+      && is_a <scalar_int_mode> (mode, &int_mode))
+    {
+      poly_int64 value = rtx_to_poly_int64 (src);
+      if (!value.is_constant () && !TARGET_VECTOR)
+	{
+	  rvv_report_required ();
+	  return true;
+	}
+      if (GET_MODE_SIZE (mode).to_constant () < GET_MODE_SIZE (Pmode))
+	{
+	  rtx clobber = gen_reg_rtx (Pmode);
+	  rvv_expand_poly_move (Pmode, gen_lowpart (Pmode, dest), clobber, src);
+	}
+      else
+	{
+	  rtx clobber = gen_reg_rtx (mode);
+	  rvv_expand_poly_move (mode, dest, clobber, src);
+	}
+      return true;
+    }
   /* Expand 
        (set (reg:QI target) (mem:QI (address))) 
      to
@@ -4998,7 +5034,9 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
 
       if (regsize != 1)
 	return ((regno % regsize) == 0);		
-    }			
+    }
+  else if (regno == VTYPE_REGNUM || regno == VL_REGNUM)
+    return (nregs == 1 && mode == SImode);
   else
     return false;
 
@@ -5388,7 +5426,7 @@ riscv_conditional_register_usage (void)
   if (!TARGET_VECTOR)
     {
       for (int regno = V_REG_FIRST; regno <= V_REG_LAST; regno++)
-	call_used_regs[regno] = 1;
+	fixed_regs[regno] = call_used_regs[regno] = 1;
 	
       fixed_regs[VTYPE_REGNUM] = call_used_regs[VTYPE_REGNUM] = 1;
       fixed_regs[VL_REGNUM] = call_used_regs[VL_REGNUM] = 1;
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 09/21]   Add misc function intrinsic support
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (7 preceding siblings ...)
  2022-05-31  8:49 ` [PATCH 08/21] Add poly manipulation juzhe.zhong
@ 2022-05-31  8:50 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 11/21] Add calling function support juzhe.zhong
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:50 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/predicates.md (vector_any_register_operand): New predicate.
        * config/riscv/riscv-protos.h (riscv_regmode_natural_size): Declare.
        * config/riscv/riscv-vector-builtins-functions.cc (get_pred_str): New function.
        (get_operation_str): New function.
        (is_dt_ptr): New function.
        (is_dt_unsigned): New function.
        (is_dt_const): New function.
        (intrinsic_rename): New function.
        (get_dt_t_with_index): New function.
        (misc::assemble_name): New function.
        (misc::get_return_type): New function.
        (misc::get_argument_types): New function.
        (vreinterpret::expand): New function.
        (vlmul_ext::assemble_name): New function.
        (vlmul_ext::expand): New function.
        (vlmul_trunc::assemble_name): New function.
        (vlmul_trunc::expand): New function.
        (vundefined::assemble_name): New function.
        (vundefined::get_argument_types): New function.
        (vundefined::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def (vreinterpret): New macro definition.
        (vlmul_ext): New macro definition.
        (vlmul_trunc): New macro definition.
        (vundefined): New macro definition.
        * config/riscv/riscv-vector-builtins-functions.h (class misc): New class.
        (class vreinterpret): New class.
        (class vlmul_ext): New class.
        (class vlmul_trunc): New class.
        (class vundefined): New class.
        * config/riscv/riscv-vector-builtins-iterators.def (VCONVERFI): New iterator.
        (VCONVERI): New iterator.
        (VCONVERI2): New iterator.
        (VCONVERI3): New iterator.
        (VCONVERF): New iterator.
        (VSETI): New iterator.
        (VSETF): New iterator.
        (VGETI): New iterator.
        (VGETF): New iterator.
        (VLMULEXT): New iterator.
        (VLMULTRUNC): New iterator.
        * config/riscv/riscv.cc (riscv_hard_regno_mode_ok): Disallow
        non-floating-point modes in FP registers.
        (riscv_class_max_nregs): Fix reversed reg_class_subset_p checks.
        (riscv_can_change_mode_class): Add RVV mode can change support.
        (riscv_regmode_natural_size): New function.
        * config/riscv/riscv.h (REGMODE_NATURAL_SIZE): New macro.
        * config/riscv/vector-iterators.md: New iterators and attributes.
        * config/riscv/vector.md (@vreinterpret<mode>): New pattern.
        (@vlmul_ext<mode>): New pattern.
        (*vlmul_ext<VLMULEXT:mode><V:mode>): New pattern.
        (@vlmul_trunc<mode>): New pattern.
        (@vset<mode>): New pattern.
        (@vget<mode>): New pattern.

gcc/testsuite/ChangeLog:

        * g++.target/riscv/rvv/misc_func.C: New test.
        * g++.target/riscv/rvv/rvv-intrinsic.exp: New test.
        * gcc.target/riscv/rvv/intrinsic/misc_func.c: New test.
        
---
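
Usage sketch of the new helper intrinsics (the non-overloaded names below
follow the rvv-intrinsic-doc spellings, which encode the source and
destination types; the overloaded forms appear in the testcases):

#include <riscv_vector.h>

vuint8m1_t f1 (vint8m1_t x)
{
  return vreinterpret_v_i8m1_u8m1 (x);  /* same bits, new type */
}

vint8m2_t f2 (vint8m1_t x)
{
  /* Widen to an LMUL=2 register group; the upper part is undefined.  */
  return vlmul_ext_v_i8m1_i8m2 (x);
}

vint8m1_t f3 (vint8m2_t x)
{
  return vlmul_trunc_v_i8m2_i8m1 (x);   /* keep the low part */
}

vint8m1_t f4 (void)
{
  return vundefined_i8m1 ();            /* explicitly undefined value */
}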
 gcc/config/riscv/predicates.md                |    7 +
 gcc/config/riscv/riscv-protos.h               |    1 +
 .../riscv/riscv-vector-builtins-functions.cc  |  288 ++
 .../riscv/riscv-vector-builtins-functions.def |   22 +
 .../riscv/riscv-vector-builtins-functions.h   |   62 +
 .../riscv/riscv-vector-builtins-iterators.def |  171 +
 gcc/config/riscv/riscv.cc                     |   33 +-
 gcc/config/riscv/riscv.h                      |    2 +
 gcc/config/riscv/vector-iterators.md          |  142 +
 gcc/config/riscv/vector.md                    |  133 +
 .../g++.target/riscv/rvv/misc_func.C          | 2597 +++++++++++++++
 .../g++.target/riscv/rvv/rvv-intrinsic.exp    |   39 +
 .../riscv/rvv/intrinsic/misc_func.c           | 2921 +++++++++++++++++
 13 files changed, 6414 insertions(+), 4 deletions(-)
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/misc_func.C
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/rvv-intrinsic.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/misc_func.c

diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 7a101676538..e31c829bf5b 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -257,6 +257,13 @@
   return GET_MODE (op) == Pmode;
 })
 
+;; A special predicate that accepts a vector register operand of any mode.
+(define_special_predicate "vector_any_register_operand"
+  (match_code "reg, subreg")
+{
+  return VECTOR_MODE_P (GET_MODE (op));
+})
+
 (define_predicate "vector_reg_or_const0_operand"
   (ior (match_operand 0 "register_operand")
        (match_test "op == const0_rtx && !VECTOR_MODE_P (GET_MODE (op))")))
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 2d63fe76930..fd8906e47de 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -75,6 +75,7 @@ extern bool riscv_store_data_bypass_p (rtx_insn *, rtx_insn *);
 extern rtx riscv_gen_gpr_save_insn (struct riscv_frame_info *);
 extern bool riscv_gpr_save_operation_p (rtx);
 extern rtx riscv_add_offset (rtx, rtx, HOST_WIDE_INT);
+extern poly_uint64 riscv_regmode_natural_size (machine_mode);
 
 /* Routines implemented in riscv-c.cc.  */
 void riscv_cpu_cpp_builtins (cpp_reader *);
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index 0acda8f671e..a25f167f40e 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -143,6 +143,131 @@ get_vtype_for_mode (machine_mode mode)
   gcc_unreachable ();
 }
 
+static const char *
+get_pred_str (enum predication_index pred, bool overloaded_p = false)
+{
+  switch (pred)
+    {
+    case PRED_void:
+    case PRED_none:
+      return "";
+
+    case PRED_m:
+      return overloaded_p ? "" : "_m";
+
+    case PRED_tam:
+      return "_tam";
+
+    case PRED_tum:
+      return "_tum";
+
+    case PRED_tu:
+      return "_tu";
+
+    case PRED_ta:
+      return "_ta";
+
+    case PRED_ma:
+      return "_ma";
+
+    case PRED_mu:
+      return "_mu";
+
+    case PRED_tama:
+      return "_tama";
+
+    case PRED_tamu:
+      return "_tamu";
+
+    case PRED_tuma:
+      return "_tuma";
+
+    case PRED_tumu:
+      return "_tumu";
+
+    default:
+      gcc_unreachable ();
+    }
+}
+
+static const char *
+get_operation_str (enum operation_index op)
+{
+  switch (op)
+    {
+    case OP_vv:
+      return "_vv";
+
+    case OP_vx:
+      return "_vx";
+
+    case OP_v:
+      return "_v";
+
+    case OP_wv:
+      return "_wv";
+
+    case OP_wx:
+      return "_wx";
+
+    case OP_x_x_v:
+      return "_x_x_v";
+
+    case OP_vf2:
+      return "_vf2";
+
+    case OP_vf4:
+      return "_vf4";
+
+    case OP_vf8:
+      return "_vf8";
+
+    case OP_vvm:
+      return "_vvm";
+
+    case OP_vxm:
+      return "_vxm";
+
+    case OP_x_x_w:
+      return "_x_x_w";
+
+    case OP_v_v:
+      return "_v_v";
+
+    case OP_v_x:
+      return "_v_x";
+
+    case OP_v_f:
+      return "_v_f";
+
+    case OP_vs:
+      return "_vs";
+
+    case OP_vf:
+      return "_vf";
+
+    case OP_wf:
+      return "_wf";
+
+    case OP_vfm:
+      return "_vfm";
+
+    case OP_vm:
+      return "_vm";
+
+    case OP_mm:
+      return "_mm";
+
+    case OP_m:
+      return "_m";
+
+    default:
+      break;
+    }
+
+  return "";
+}
+
 static const char *
 mode2data_type_str (machine_mode mode, bool u, bool ie)
 {
@@ -300,6 +425,53 @@ get_dt_t (machine_mode mode, bool u, bool ptr = false, bool c = false)
   return type;
 }
 
+/* Helper functions to query the data type of an argument.  */
+
+static bool
+is_dt_ptr (enum data_type_index dt)
+{
+  return dt == DT_ptr || dt == DT_uptr || dt == DT_c_ptr || dt == DT_c_uptr;
+}
+
+static bool
+is_dt_unsigned (enum data_type_index dt)
+{
+  return dt == DT_unsigned || dt == DT_uptr || dt == DT_c_uptr;
+}
+
+static bool
+is_dt_const (enum data_type_index dt)
+{
+  return dt == DT_c_ptr || dt == DT_c_uptr;
+}
+
+/* Helper functions for the builder implementation.  */
+static void
+intrinsic_rename (function_instance &instance, int index1, int index2)
+{
+  machine_mode dst_mode = instance.get_arg_pattern ().arg_list[index1];
+  machine_mode src_mode = instance.get_arg_pattern ().arg_list[index2];
+  bool dst_unsigned_p = instance.get_data_type_list ()[index1] == DT_unsigned;
+  bool src_unsigned_p = instance.get_data_type_list ()[index2] == DT_unsigned;
+  const char *name = instance.get_base_name ();
+  const char *op = get_operation_str (instance.get_operation ());
+  const char *src_suffix = mode2data_type_str (src_mode, src_unsigned_p, false);
+  const char *dst_suffix = mode2data_type_str (dst_mode, dst_unsigned_p, false);
+  const char *pred = get_pred_str (instance.get_pred ());
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s%s%s%s", name, op, src_suffix, dst_suffix, pred);
+}
+
+static tree
+get_dt_t_with_index (const function_instance &instance, int index)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[index];
+  bool unsigned_p = is_dt_unsigned (instance.get_data_type_list ()[index]);
+  bool ptr_p = is_dt_ptr (instance.get_data_type_list ()[index]);
+  bool c_p = is_dt_const (instance.get_data_type_list ()[index]);
+  return get_dt_t (mode, unsigned_p, ptr_p, c_p);
+}
+
+
 /* Return true if the function has no return value.  */
 static bool
 function_returns_void_p (tree fndecl)
@@ -1222,6 +1394,122 @@ vsetvlmax::expand (const function_instance &instance, tree exp, rtx target) cons
                                 !function_returns_void_p (fndecl));
 }
 
+/* A function implementation for miscellaneous functions.  */
+char *
+misc::assemble_name (function_instance &instance)
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (mode2data_type_str (mode, unsigned_p, false));
+  return finish_name ();
+}
+
+tree
+misc::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+void
+misc::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  argument_types.quick_push (get_dt_t_with_index (instance, 1));
+}
+
+/* A function implementation for vreinterpret functions.  */
+rtx
+vreinterpret::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  enum insn_code icode = code_for_vreinterpret (instance.get_arg_pattern ().arg_list[0]);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vlmul_ext functions.  */
+char *
+vlmul_ext::assemble_name (function_instance &instance)
+{
+  machine_mode tmode = instance.get_arg_pattern ().arg_list[0];
+  machine_mode smode = instance.get_arg_pattern ().arg_list[1];
+  if (GET_MODE_INNER (tmode) != GET_MODE_INNER (smode))
+    return nullptr;
+
+  if (tmode == smode)
+    return nullptr;
+  
+  if (known_lt (GET_MODE_SIZE (tmode), GET_MODE_SIZE (smode)))
+    return nullptr;
+
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (mode2data_type_str (tmode, unsigned_p, false));
+  return finish_name ();
+}
+
+rtx
+vlmul_ext::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  enum insn_code icode = code_for_vlmul_ext (instance.get_arg_pattern ().arg_list[0]);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vlmul_trunc functions.  */
+char *
+vlmul_trunc::assemble_name (function_instance &instance)
+{
+  machine_mode tmode = instance.get_arg_pattern ().arg_list[0];
+  machine_mode smode = instance.get_arg_pattern ().arg_list[1];
+  if (GET_MODE_INNER (tmode) != GET_MODE_INNER (smode))
+    return nullptr;
+
+  if (tmode == smode)
+    return nullptr;
+  
+  if (known_gt (GET_MODE_SIZE (tmode), GET_MODE_SIZE (smode)))
+    return nullptr;
+
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (mode2data_type_str (tmode, unsigned_p, false));
+  return finish_name ();
+}
+
+rtx
+vlmul_trunc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  enum insn_code icode = code_for_vlmul_trunc (instance.get_arg_pattern ().arg_list[0]);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vundefined functions.  */
+char *
+vundefined::assemble_name (function_instance &instance)
+{
+  const char *name = instance.get_base_name ();
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  const char *dt = mode2data_type_str (mode, unsigned_p, false);
+  snprintf (instance.function_name, NAME_MAXLEN, "%s%s", name, dt);
+  return nullptr;
+}
+
+void
+vundefined::get_argument_types (const function_instance &,
+                          vec<tree> &) const
+{
+}
+
+rtx
+vundefined::expand (const function_instance &, tree, rtx target) const
+{
+  emit_clobber (copy_rtx (target));
+  return target;
+}
+
 } // end namespace riscv_vector
 
 using namespace riscv_vector;
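
To make the renaming concrete: intrinsic_rename composes
name + operation + source suffix + destination suffix + predication, so
for vreinterpret with OP_v, a signed m1 source and an unsigned m1
destination the pieces compose as

"vreinterpret" + "_v" + "_i8m1" + "_u8m1" + "" -> "vreinterpret_v_i8m1_u8m1"

(the suffix spellings come from mode2data_type_str; "m1" here assumes the
source mode maps to the LMUL=1 types).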
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 666e8503d81..86130e02381 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -34,6 +34,28 @@ along with GCC; see the file COPYING3. If not see
 /* 6. Configuration-Setting Instructions. */
 DEF_RVV_FUNCTION(vsetvl, vsetvl, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
 DEF_RVV_FUNCTION(vsetvlmax, vsetvlmax, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
+/* Miscellaneous helper intrinsics for the programmer.  */
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VI, unsigned), VATTR(0, VI, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VI, signed), VATTR(0, VI, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VATTR(1, VCONVERFI, signed), VITER(VCONVERF, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VATTR(1, VCONVERFI, unsigned), VITER(VCONVERF, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERF, signed), VATTR(0, VCONVERFI, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERF, signed), VATTR(0, VCONVERFI, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI, signed), VATTR(0, VCONVERI, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI, unsigned), VATTR(0, VCONVERI, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI2, signed), VATTR(0, VCONVERI2, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI2, unsigned), VATTR(0, VCONVERI2, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI3, signed), VATTR(0, VCONVERI3, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vreinterpret, vreinterpret, (2, VITER(VCONVERI3, unsigned), VATTR(0, VCONVERI3, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_ext, vlmul_ext, (2, VITER(VLMULEXT, signed), VITER(VI, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_ext, vlmul_ext, (2, VITER(VLMULEXT, unsigned), VITER(VI, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_ext, vlmul_ext, (2, VITER(VLMULEXT, signed), VITER(VF, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_trunc, vlmul_trunc, (2, VITER(VLMULTRUNC, signed), VITER(VI, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_trunc, vlmul_trunc, (2, VITER(VLMULTRUNC, unsigned), VITER(VI, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vlmul_trunc, vlmul_trunc, (2, VITER(VLMULTRUNC, signed), VITER(VF, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VI, unsigned)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VF, signed)), PAT_none, PRED_none, OP_none)
 #undef REQUIRED_EXTENSIONS
 #undef DEF_RVV_FUNCTION
 #undef VITER
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h
index 9846ded1155..85e4af5f079 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.h
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.h
@@ -522,6 +522,68 @@ public:
   virtual rtx expand (const function_instance &, tree, rtx) const override;
 };
 
+/* A function_builder for miscellaneous functions.  */
+class misc : public function_builder
+{
+public:
+  // Use the same constructors as function_builder.
+  using function_builder::function_builder;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual tree get_return_type (const function_instance &) const override;
+};
+
+/* A function_builder for vreinterpret functions.  */
+class vreinterpret : public misc
+{
+public:
+  // Use the same constructors as misc.
+  using misc::misc;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_builder for vlmul_ext functions.  */
+class vlmul_ext : public misc
+{
+public:
+  // Use the same constructors as misc.
+  using misc::misc;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_builder for vlmul_trunc functions.  */
+class vlmul_trunc : public misc
+{
+public:
+  // Use the same constructors as misc.
+  using misc::misc;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_builder for vundefined functions.  */
+class vundefined : public misc
+{
+public:
+  // Use the same constructors as misc.
+  using misc::misc;
+
+  virtual char * assemble_name (function_instance &) override;
+
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
 } // namespace riscv_vector
 
 #endif // end GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv-vector-builtins-iterators.def b/gcc/config/riscv/riscv-vector-builtins-iterators.def
index 77a391c7630..8f2ea912804 100644
--- a/gcc/config/riscv/riscv-vector-builtins-iterators.def
+++ b/gcc/config/riscv/riscv-vector-builtins-iterators.def
@@ -118,6 +118,177 @@ DEF_RISCV_ARG_MODE_ATTR(V64BITI, 0, VNx2DI, VNx2DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 1, VNx4DI, VNx4DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 2, VNx8DI, VNx8DI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(V64BITI, 3, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERFI, 9)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 0, VNx2SF, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 1, VNx4SF, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 2, VNx8SF, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 3, VNx16SF, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 4, VNx32SF, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 5, VNx2DF, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 6, VNx4DF, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 7, VNx8DF, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERFI, 8, VNx16DF, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERI, 21)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 0, VNx2HI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 1, VNx4HI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 2, VNx8HI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 3, VNx16HI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 4, VNx32HI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 5, VNx64HI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 6, VNx2SI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 7, VNx4SI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 8, VNx8SI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 9, VNx16SI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 10, VNx32SI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 11, VNx2DI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 12, VNx4DI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 13, VNx8DI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 14, VNx16DI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 15, VNx4QI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 16, VNx8QI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 17, VNx16QI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 18, VNx32QI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 19, VNx64QI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI, 20, VNx128QI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERI2, 19)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 0, VNx2SI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 1, VNx4SI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 2, VNx8SI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 3, VNx16SI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 4, VNx32SI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 5, VNx2DI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 6, VNx4DI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 7, VNx8DI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 8, VNx16DI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 9, VNx8QI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 10, VNx16QI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 11, VNx32QI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 12, VNx64QI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 13, VNx128QI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 14, VNx4HI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 15, VNx8HI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 16, VNx16HI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 17, VNx32HI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI2, 18, VNx64HI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERI3, 16)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 0, VNx2DI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 1, VNx4DI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 2, VNx8DI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 3, VNx16DI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 4, VNx16QI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 5, VNx32QI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 6, VNx64QI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 7, VNx128QI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 8, VNx8HI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 9, VNx16HI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 10, VNx32HI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 11, VNx64HI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 12, VNx4SI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 13, VNx8SI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 14, VNx16SI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERI3, 15, VNx32SI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VCONVERF, 9)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 0, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 1, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 2, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 3, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 4, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 5, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 6, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 7, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VCONVERF, 8, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VSETI, 12)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 0, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 1, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 2, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 3, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 4, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 5, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 6, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 7, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 8, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 9, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 10, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VSETI, 11, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VSETF, 6)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 0, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 1, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 2, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 3, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 4, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VSETF, 5, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VGETI, 12)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 0, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 1, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 2, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 3, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 4, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 5, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 6, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 7, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 8, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 9, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 10, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VGETI, 11, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VGETF, 6)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 0, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 1, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 2, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 3, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 4, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VGETF, 5, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VLMULEXT, 25)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 0, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 1, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 2, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 3, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 4, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 5, VNx128QI, VNx128QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 6, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 7, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 8, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 9, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 10, VNx64HI, VNx64HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 11, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 12, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 13, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 14, VNx32SI, VNx32SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 15, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 16, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 17, VNx16DI, VNx16DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 18, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 19, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 20, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 21, VNx32SF, VNx32SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 22, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 23, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULEXT, 24, VNx16DF, VNx16DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VLMULTRUNC, 25)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 0, VNx2QI, VNx2QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 1, VNx4QI, VNx4QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 2, VNx8QI, VNx8QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 3, VNx16QI, VNx16QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 4, VNx32QI, VNx32QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 5, VNx64QI, VNx64QI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 6, VNx2HI, VNx2HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 7, VNx4HI, VNx4HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 8, VNx8HI, VNx8HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 9, VNx16HI, VNx16HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 10, VNx32HI, VNx32HI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 11, VNx2SI, VNx2SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 12, VNx4SI, VNx4SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 13, VNx8SI, VNx8SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 14, VNx16SI, VNx16SI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 15, VNx2DI, VNx2DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 16, VNx4DI, VNx4DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 17, VNx8DI, VNx8DI, TARGET_ANY)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 18, VNx2SF, VNx2SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 19, VNx4SF, VNx4SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 20, VNx8SF, VNx8SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 21, VNx16SF, VNx16SF, TARGET_HARD_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 22, VNx2DF, VNx2DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 23, VNx4DF, VNx4DF, TARGET_DOUBLE_FLOAT)
+DEF_RISCV_ARG_MODE_ATTR(VLMULTRUNC, 24, VNx8DF, VNx8DF, TARGET_DOUBLE_FLOAT)
 DEF_RISCV_ARG_MODE_ATTR_VARIABLE(VM, 69)
 DEF_RISCV_ARG_MODE_ATTR(VM, 0, VNx2BI, VNx2BI, TARGET_ANY)
 DEF_RISCV_ARG_MODE_ATTR(VM, 1, VNx4BI, VNx4BI, TARGET_ANY)
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 7a1f19b32ee..e88057e992a 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5009,6 +5009,9 @@ riscv_hard_regno_mode_ok (unsigned int regno, machine_mode mode)
       if (rvv_mode_p (mode))
 	return false;
 	
+      if (!FLOAT_MODE_P (mode))
+	return false;
+	
       if (!FP_REG_P (regno + nregs - 1))
 	return false;
 
@@ -5073,13 +5076,13 @@ riscv_class_max_nregs (reg_class_t rclass, machine_mode mode)
   if (reg_class_subset_p (rclass, GR_REGS))
     return riscv_hard_regno_nregs (GP_REG_FIRST, mode);
 		
-  if (reg_class_subset_p (V_REGS, rclass))
+  if (reg_class_subset_p (rclass, V_REGS))
     return riscv_hard_regno_nregs (V_REG_FIRST, mode);
 
-  if (reg_class_subset_p (VL_REGS, rclass))
+  if (reg_class_subset_p (rclass, VL_REGS))
     return 1;
 
-  if (reg_class_subset_p (VTYPE_REGS, rclass))
+  if (reg_class_subset_p (rclass, VTYPE_REGS))
     return 1;
 		
   return 0;
@@ -5718,8 +5721,11 @@ riscv_slow_unaligned_access (machine_mode, unsigned int)
 /* Implement TARGET_CAN_CHANGE_MODE_CLASS.  */
 
 static bool
-riscv_can_change_mode_class (machine_mode, machine_mode, reg_class_t rclass)
+riscv_can_change_mode_class (machine_mode from, machine_mode to, reg_class_t rclass)
 {
+  if (rvv_mode_p (from) && rvv_mode_p (to))
+    return true;
+
   return !reg_classes_intersect_p (FP_REGS, rclass);
 }
 
@@ -6003,6 +6009,25 @@ riscv_mangle_type (const_tree type)
   return NULL;
 }
 
+/* Implement REGMODE_NATURAL_SIZE.  */
+
+poly_uint64
+riscv_regmode_natural_size (machine_mode mode)
+{
+  /* The natural size for RVV data modes is one RVV data vector,
+     and similarly for predicates.  We can't independently modify
+     anything smaller than that.  */
+  /* ??? For now, only do this for variable-width RVV registers.
+     Doing it for constant-sized registers breaks lower-subreg.c.  */
+  /* For partial tuple vectors (as used by segment loads and
+     stores), the natural size should instead be the mode of
+     the vector subpart.  */
+  if (!riscv_vector_chunks.is_constant () && rvv_mode_p (mode))
+    return BYTES_PER_RISCV_VECTOR;
+
+  return UNITS_PER_WORD;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index cb4cfc0f73e..c54c984e70b 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -1069,4 +1069,6 @@ extern void riscv_remove_unneeded_save_restore_calls (void);
 
 #define REGISTER_TARGET_PRAGMAS() riscv_register_pragmas ()
 
+#define REGMODE_NATURAL_SIZE(MODE) riscv_regmode_natural_size (MODE)
+
 #endif /* ! GCC_RISCV_H */
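
The practical effect of REGMODE_NATURAL_SIZE is that generic passes such
as lower-subreg will not split values living in scalable vector modes
into word-sized pieces: a subreg like (subreg:DI (reg:VNx2DI v) 8) cannot
be reduced to a fixed hard-register word when the enclosing register's
size is only known at run time, so whole vector registers stay intact.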
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index e01305ef3fc..df9011ee901 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -23,6 +23,18 @@
   UNSPEC_VSETVLI
   ;; RVV instructions.
   UNSPEC_RVV
+  ;; reinterpret
+  UNSPEC_REINTERPRET
+  ;; lmul_ext
+  UNSPEC_LMUL_EXT
+  ;; lmul_trunc
+  UNSPEC_LMUL_TRUNC
+  ;; set a vector
+  UNSPEC_SET_VECTOR
+  ;; get a vector
+  UNSPEC_GET_VECTOR
+  ;; vec_duplicate
+  UNSPEC_VEC_DUPLICATE
   ;; vector select
   UNSPEC_SELECT
   
@@ -85,6 +97,136 @@
 ;; All vector modes supported for integer sew = 64.
 (define_mode_iterator V64BITI [VNx2DI VNx4DI VNx8DI VNx16DI])
 
+;; Interconversion between vector integer and floating-point modes.
+(define_mode_attr VCONVERFI [
+  (VNx2SF "VNx2SI")
+  (VNx4SF "VNx4SI")
+  (VNx8SF "VNx8SI")
+  (VNx16SF "VNx16SI")
+  (VNx32SF "VNx32SI")
+  (VNx2DF "VNx2DI")
+  (VNx4DF "VNx4DI")
+  (VNx8DF "VNx8DI")
+  (VNx16DF "VNx16DI")])
+
+;; Interconversion between vector integer modes with the same LMUL but
+;; different SEW.
+(define_mode_attr VCONVERI [
+  (VNx2HI "VNx4QI")
+  (VNx4HI "VNx8QI")
+  (VNx8HI "VNx16QI")
+  (VNx16HI "VNx32QI")
+  (VNx32HI "VNx64QI")
+  (VNx64HI "VNx128QI")
+  (VNx2SI "VNx8QI")
+  (VNx4SI "VNx16QI")
+  (VNx8SI "VNx32QI")
+  (VNx16SI "VNx64QI")
+  (VNx32SI "VNx128QI")
+  (VNx2DI "VNx16QI")
+  (VNx4DI "VNx32QI")
+  (VNx8DI "VNx64QI")
+  (VNx16DI "VNx128QI")
+  (VNx4QI "VNx2HI")
+  (VNx8QI "VNx4HI")
+  (VNx16QI "VNx8HI")
+  (VNx32QI "VNx16HI")
+  (VNx64QI "VNx32HI")
+  (VNx128QI "VNx64HI")])
+
+(define_mode_attr VCONVERI2 [
+  (VNx2SI "VNx4HI")
+  (VNx4SI "VNx8HI")
+  (VNx8SI "VNx16HI")
+  (VNx16SI "VNx32HI")
+  (VNx32SI "VNx64HI")
+  (VNx2DI "VNx8HI")
+  (VNx4DI "VNx16HI")
+  (VNx8DI "VNx32HI")
+  (VNx16DI "VNx64HI")
+  (VNx8QI "VNx2SI")
+  (VNx16QI "VNx4SI")
+  (VNx32QI "VNx8SI")
+  (VNx64QI "VNx16SI")
+  (VNx128QI "VNx32SI")
+  (VNx4HI "VNx2SI")
+  (VNx8HI "VNx4SI")
+  (VNx16HI "VNx8SI")
+  (VNx32HI "VNx16SI")
+  (VNx64HI "VNx32SI")])
+
+(define_mode_attr VCONVERI3 [
+  (VNx2DI "VNx4SI")
+  (VNx4DI "VNx8SI")
+  (VNx8DI "VNx16SI")
+  (VNx16DI "VNx32SI")
+  (VNx16QI "VNx2DI")
+  (VNx32QI "VNx4DI")
+  (VNx64QI "VNx8DI")
+  (VNx128QI "VNx16DI")
+  (VNx8HI "VNx2DI")
+  (VNx16HI "VNx4DI")
+  (VNx32HI "VNx8DI")
+  (VNx64HI "VNx16DI")
+  (VNx4SI "VNx2DI")
+  (VNx8SI "VNx4DI")
+  (VNx16SI "VNx8DI")
+  (VNx32SI "VNx16DI")])
+
+;; Vector floating-point modes for integer/floating-point interconversion.
+(define_mode_iterator VCONVERF [
+  (VNx2SF "TARGET_HARD_FLOAT")
+  (VNx4SF "TARGET_HARD_FLOAT")
+  (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT")
+  (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT")
+  (VNx4DF "TARGET_DOUBLE_FLOAT")
+  (VNx8DF "TARGET_DOUBLE_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; Vector group modes into which a vector can be inserted (vset).
+(define_mode_iterator VSETI [
+  VNx32QI VNx64QI VNx128QI
+  VNx16HI VNx32HI VNx64HI
+  VNx8SI VNx16SI VNx32SI
+  VNx4DI VNx8DI VNx16DI])
+
+(define_mode_iterator VSETF [
+  (VNx8SF "TARGET_HARD_FLOAT") (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT") (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; Vector modes that can be extracted from a vector group (vget).
+(define_mode_iterator VGETI [
+  VNx16QI VNx32QI VNx64QI
+  VNx8HI VNx16HI VNx32HI
+  VNx4SI VNx8SI VNx16SI
+  VNx2DI VNx4DI VNx8DI])
+
+(define_mode_iterator VGETF [
+  (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT") (VNx16SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")])
+
+;; All vector modes supported as vlmul_ext destinations.
+(define_mode_iterator VLMULEXT [
+  VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI VNx128QI
+  VNx4HI VNx8HI VNx16HI VNx32HI VNx64HI
+  VNx4SI VNx8SI VNx16SI VNx32SI
+  VNx4DI VNx8DI VNx16DI
+  (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT") (VNx32SF "TARGET_HARD_FLOAT")
+  (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")
+  (VNx16DF "TARGET_DOUBLE_FLOAT")])
+
+;; All vector modes supported as vlmul_trunc destinations.
+(define_mode_iterator VLMULTRUNC [
+  VNx2QI VNx4QI VNx8QI VNx16QI VNx32QI VNx64QI
+  VNx2HI VNx4HI VNx8HI VNx16HI VNx32HI
+  VNx2SI VNx4SI VNx8SI VNx16SI
+  VNx2DI VNx4DI VNx8DI
+  (VNx2SF "TARGET_HARD_FLOAT") (VNx4SF "TARGET_HARD_FLOAT") (VNx8SF "TARGET_HARD_FLOAT")
+  (VNx16SF "TARGET_HARD_FLOAT")
+  (VNx2DF "TARGET_DOUBLE_FLOAT") (VNx4DF "TARGET_DOUBLE_FLOAT") (VNx8DF "TARGET_DOUBLE_FLOAT")])
+
 ;; Map a vector int or float mode to a vector compare mode.
 (define_mode_attr VM [
   (VNx2BI "VNx2BI") (VNx4BI "VNx4BI") (VNx8BI "VNx8BI") (VNx16BI "VNx16BI")
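
As a quick size check of the VCONVERI pairings above: each pair has equal
total size, e.g. VNx2HI <-> VNx4QI is 2 x 2 bytes = 4 x 1 byte and
VNx2DI <-> VNx16QI is 2 x 8 = 16 x 1, so every vreinterpret is a
bit-for-bit whole-register view change rather than a value conversion.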
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index 1731d969372..54e68aa165b 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -349,6 +349,139 @@
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "none")])
 
+;; -------------------------------------------------------------------------
+;; ---- [INT,FP] Vector Misc Functions
+;; -------------------------------------------------------------------------
+;; Includes:
+;; - Reinterpret between different vector modes
+;; - Vector LMUL extension
+;; - Vector LMUL truncation
+;; - Insert a vector into a vector group
+;; - Extract a vector from a vector group
+;; -------------------------------------------------------------------------
+
+;; Reinterpret operand 1 in operand 0's mode, without changing its contents.
+;; This is equivalent to a subreg on little-endian targets but not on
+;; big-endian ones; see the comment at the head of the file for details.
+(define_expand "@vreinterpret<mode>"
+  [(set (match_operand:V 0 "register_operand")
+    (unspec:V
+      [(match_operand 1 "vector_any_register_operand")] UNSPEC_REINTERPRET))]
+  "TARGET_VECTOR"
+{
+  machine_mode src_mode = GET_MODE (operands[1]);
+  if (targetm.can_change_mode_class (<MODE>mode, src_mode, FP_REGS))
+    {
+      emit_move_insn (operands[0], gen_lowpart (<MODE>mode, operands[1]));
+      DONE;
+    }
+})
+
+;; Vector LMUL extension
+(define_expand "@vlmul_ext<mode>"
+  [(set (match_operand:VLMULEXT 0 "register_operand")
+    (unspec:VLMULEXT
+      [(match_operand 1 "vector_any_register_operand")] UNSPEC_LMUL_EXT))]
+  "TARGET_VECTOR"
+{
+})
+
+(define_insn_and_split "*vlmul_ext<VLMULEXT:mode><V:mode>"
+  [(set (match_operand:VLMULEXT 0 "register_operand" "=vr, ?&vr")
+    (unspec:VLMULEXT
+      [(match_operand:V 1 "register_operand" "0, vr")] UNSPEC_LMUL_EXT))]
+  "TARGET_VECTOR"
+  "#"
+  "&& reload_completed"
+  [(const_int 0)]
+  {
+    rtx subreg = simplify_gen_subreg (<V:MODE>mode, operands[0], <VLMULEXT:MODE>mode, 0);
+    riscv_emit_move (subreg, operands[1]);
+    DONE;
+  })
+
+;; Vector LMUL truncation
+(define_expand "@vlmul_trunc<mode>"
+  [(set (match_operand:VLMULTRUNC 0 "register_operand")
+    (unspec:VLMULTRUNC
+      [(match_operand 1 "vector_any_register_operand")] UNSPEC_LMUL_TRUNC))]
+  "TARGET_VECTOR"
+{
+  rtx subreg = simplify_gen_subreg (<MODE>mode, operands[1], GET_MODE (operands[1]), 0);
+  riscv_emit_move (operands[0], subreg);
+  DONE;
+})
+
+;; Insert a vector into a vector group
+(define_expand "@vset<mode>"
+  [(set (match_operand:VSETI 0 "register_operand")
+    (unspec:VSETI
+      [(match_operand:VSETI 1 "register_operand" "0")
+       (match_operand 2 "const_int_operand")
+       (match_operand 3 "vector_any_register_operand")] UNSPEC_SET_VECTOR))]
+  "TARGET_VECTOR"
+{
+  unsigned int nvecs = exact_div (GET_MODE_SIZE (GET_MODE (operands[0])),
+        GET_MODE_SIZE (GET_MODE (operands[3]))).to_constant ();
+  poly_int64 offset = (INTVAL (operands[2]) & (nvecs - 1))
+        * GET_MODE_SIZE (GET_MODE (operands[3]));
+  rtx subreg = simplify_gen_subreg (GET_MODE (operands[3]), operands[1], GET_MODE (operands[1]), offset);
+  riscv_emit_move (subreg, operands[3]);
+  riscv_emit_move (operands[0], operands[1]);
+  DONE;
+})
+
+(define_expand "@vset<mode>"
+  [(set (match_operand:VSETF 0 "register_operand")
+    (unspec:VSETF
+      [(match_operand:VSETF 1 "register_operand" "0")
+       (match_operand 2 "const_int_operand")
+       (match_operand 3 "vector_any_register_operand")] UNSPEC_SET_VECTOR))]
+  "TARGET_VECTOR"
+{
+  unsigned int nvecs = exact_div (GET_MODE_SIZE (GET_MODE (operands[0])),
+        GET_MODE_SIZE (GET_MODE (operands[3]))).to_constant ();
+  poly_int64 offset = (INTVAL (operands[2]) & (nvecs - 1))
+        * GET_MODE_SIZE (GET_MODE (operands[3]));
+  rtx subreg = simplify_gen_subreg (GET_MODE (operands[3]), operands[1], GET_MODE (operands[1]), offset);
+  riscv_emit_move (subreg, operands[3]);
+  riscv_emit_move (operands[0], operands[1]);
+  DONE;
+})
+
+;; Extract a vector from a vector group
+(define_expand "@vget<mode>"
+  [(set (match_operand:VGETI 0 "register_operand")
+    (unspec:VGETI
+      [(match_operand 1 "vector_any_register_operand")
+       (match_operand 2 "const_int_operand")] UNSPEC_GET_VECTOR))]
+  "TARGET_VECTOR"
+{
+  unsigned int nvecs = exact_div (GET_MODE_SIZE (GET_MODE (operands[1])),
+        GET_MODE_SIZE (GET_MODE (operands[0]))).to_constant ();
+  poly_int64 offset = (INTVAL (operands[2]) & (nvecs - 1))
+        * GET_MODE_SIZE (GET_MODE (operands[0]));
+  rtx subreg = simplify_gen_subreg (GET_MODE (operands[0]), operands[1], GET_MODE (operands[1]), offset);
+  riscv_emit_move (operands[0], subreg);
+  DONE;
+})
+
+(define_expand "@vget<mode>"
+  [(set (match_operand:VGETF 0 "register_operand")
+    (unspec:VGETF
+      [(match_operand 1 "vector_any_register_operand")
+       (match_operand 2 "const_int_operand")] UNSPEC_GET_VECTOR))]
+  "TARGET_VECTOR"
+{
+  unsigned int nvecs = exact_div (GET_MODE_SIZE (GET_MODE (operands[1])),
+        GET_MODE_SIZE (GET_MODE (operands[0]))).to_constant ();
+  poly_int64 offset = (INTVAL (operands[2]) & (nvecs - 1))
+        * GET_MODE_SIZE (GET_MODE (operands[0]));
+  rtx subreg = simplify_gen_subreg (GET_MODE (operands[0]), operands[1], GET_MODE (operands[1]), offset);
+  riscv_emit_move (operands[0], subreg);
+  DONE;
+})
+
 ;; -------------------------------------------------------------------------------
 ;; ---- 7. Vector Loads and Stores
 ;; -------------------------------------------------------------------------------
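
All of the expanders above reduce to whole-register subreg moves, so after
register allocation they either vanish (when source and destination share
hard registers) or become a single whole-register move.  A sketch for
extracting the low LMUL=1 part of an LMUL=2 group (the vget/vset
intrinsics themselves are wired up in a later patch of this series; the
name follows the rvv-intrinsic-doc):

vint8m1_t f (vint8m2_t x)
{
  return vget_v_i8m2_i8m1 (x, 0);  /* index 0: the low vector of the group */
}

The expected result is typically no instruction at all, or one vmv1r.v
whole-register move.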
diff --git a/gcc/testsuite/g++.target/riscv/rvv/misc_func.C b/gcc/testsuite/g++.target/riscv/rvv/misc_func.C
new file mode 100644
index 00000000000..b527bce1cab
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/misc_func.C
@@ -0,0 +1,2597 @@
+/* { dg-do compile } */
+/* { dg-skip-if "test vector intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+vuint8mf8_t test_vreinterpret_v_i8mf8_u8mf8(vint8mf8_t src)
+{
+  vuint8mf8_t a = vreinterpret_u8mf8(src);
+  return a;
+}
+
+vuint8mf4_t test_vreinterpret_v_i8mf4_u8mf4(vint8mf4_t src)
+{
+  vuint8mf4_t a = vreinterpret_u8mf4(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_i8mf2_u8mf2(vint8mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_i8m1_u8m1(vint8m1_t src)
+{
+  vuint8m1_t a = vreinterpret_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_i8m2_u8m2(vint8m2_t src)
+{
+  vuint8m2_t a = vreinterpret_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_i8m4_u8m4(vint8m4_t src)
+{
+  vuint8m4_t a = vreinterpret_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_i8m8_u8m8(vint8m8_t src)
+{
+  vuint8m8_t a = vreinterpret_u8m8(src);
+  return a;
+}
+
+vint8mf8_t test_vreinterpret_v_u8mf8_i8mf8(vuint8mf8_t src)
+{
+  vint8mf8_t a = vreinterpret_i8mf8(src);
+  return a;
+}
+
+vint8mf4_t test_vreinterpret_v_u8mf4_i8mf4(vuint8mf4_t src)
+{
+  vint8mf4_t a = vreinterpret_i8mf4(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_u8mf2_i8mf2(vuint8mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_u8m1_i8m1(vuint8m1_t src)
+{
+  vint8m1_t a = vreinterpret_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_u8m2_i8m2(vuint8m2_t src)
+{
+  vint8m2_t a = vreinterpret_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_u8m4_i8m4(vuint8m4_t src)
+{
+  vint8m4_t a = vreinterpret_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_u8m8_i8m8(vuint8m8_t src)
+{
+  vint8m8_t a = vreinterpret_i8m8(src);
+  return a;
+}
+
+vuint16mf4_t test_vreinterpret_v_i16mf4_u16mf4(vint16mf4_t src)
+{
+  vuint16mf4_t a = vreinterpret_u16mf4(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_i16mf2_u16mf2(vint16mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_i16m1_u16m1(vint16m1_t src)
+{
+  vuint16m1_t a = vreinterpret_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_i16m2_u16m2(vint16m2_t src)
+{
+  vuint16m2_t a = vreinterpret_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_i16m4_u16m4(vint16m4_t src)
+{
+  vuint16m4_t a = vreinterpret_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_i16m8_u16m8(vint16m8_t src)
+{
+  vuint16m8_t a = vreinterpret_u16m8(src);
+  return a;
+}
+
+vint16mf4_t test_vreinterpret_v_u16mf4_i16mf4(vuint16mf4_t src)
+{
+  vint16mf4_t a = vreinterpret_i16mf4(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_u16mf2_i16mf2(vuint16mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_u16m1_i16m1(vuint16m1_t src)
+{
+  vint16m1_t a = vreinterpret_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_u16m2_i16m2(vuint16m2_t src)
+{
+  vint16m2_t a = vreinterpret_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_u16m4_i16m4(vuint16m4_t src)
+{
+  vint16m4_t a = vreinterpret_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_u16m8_i16m8(vuint16m8_t src)
+{
+  vint16m8_t a = vreinterpret_i16m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_i32mf2_u32mf2(vint32mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_i32m1_u32m1(vint32m1_t src)
+{
+  vuint32m1_t a = vreinterpret_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_i32m2_u32m2(vint32m2_t src)
+{
+  vuint32m2_t a = vreinterpret_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_i32m4_u32m4(vint32m4_t src)
+{
+  vuint32m4_t a = vreinterpret_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_i32m8_u32m8(vint32m8_t src)
+{
+  vuint32m8_t a = vreinterpret_u32m8(src);
+  return a;
+}
+
+vfloat32mf2_t test_vreinterpret_v_i32mf2_f32mf2(vint32mf2_t src)
+{
+  vfloat32mf2_t a = vreinterpret_f32mf2(src);
+  return a;
+}
+
+vfloat32m1_t test_vreinterpret_v_i32m1_f32m1(vint32m1_t src)
+{
+  vfloat32m1_t a = vreinterpret_f32m1(src);
+  return a;
+}
+
+vfloat32m2_t test_vreinterpret_v_i32m2_f32m2(vint32m2_t src)
+{
+  vfloat32m2_t a = vreinterpret_f32m2(src);
+  return a;
+}
+
+vfloat32m4_t test_vreinterpret_v_i32m4_f32m4(vint32m4_t src)
+{
+  vfloat32m4_t a = vreinterpret_f32m4(src);
+  return a;
+}
+
+vfloat32m8_t test_vreinterpret_v_i32m8_f32m8(vint32m8_t src)
+{
+  vfloat32m8_t a = vreinterpret_f32m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_u32mf2_i32mf2(vuint32mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_u32m1_i32m1(vuint32m1_t src)
+{
+  vint32m1_t a = vreinterpret_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_u32m2_i32m2(vuint32m2_t src)
+{
+  vint32m2_t a = vreinterpret_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_u32m4_i32m4(vuint32m4_t src)
+{
+  vint32m4_t a = vreinterpret_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_u32m8_i32m8(vuint32m8_t src)
+{
+  vint32m8_t a = vreinterpret_i32m8(src);
+  return a;
+}
+
+vfloat32mf2_t test_vreinterpret_v_u32mf2_f32mf2(vuint32mf2_t src)
+{
+  vfloat32mf2_t a = vreinterpret_f32mf2(src);
+  return a;
+}
+
+vfloat32m1_t test_vreinterpret_v_u32m1_f32m1(vuint32m1_t src)
+{
+  vfloat32m1_t a = vreinterpret_f32m1(src);
+  return a;
+}
+
+vfloat32m2_t test_vreinterpret_v_u32m2_f32m2(vuint32m2_t src)
+{
+  vfloat32m2_t a = vreinterpret_f32m2(src);
+  return a;
+}
+
+vfloat32m4_t test_vreinterpret_v_u32m4_f32m4(vuint32m4_t src)
+{
+  vfloat32m4_t a = vreinterpret_f32m4(src);
+  return a;
+}
+
+vfloat32m8_t test_vreinterpret_v_u32m8_f32m8(vuint32m8_t src)
+{
+  vfloat32m8_t a = vreinterpret_f32m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_f32mf2_i32mf2(vfloat32mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_f32m1_i32m1(vfloat32m1_t src)
+{
+  vint32m1_t a = vreinterpret_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_f32m2_i32m2(vfloat32m2_t src)
+{
+  vint32m2_t a = vreinterpret_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_f32m4_i32m4(vfloat32m4_t src)
+{
+  vint32m4_t a = vreinterpret_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_f32m8_i32m8(vfloat32m8_t src)
+{
+  vint32m8_t a = vreinterpret_i32m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_f32mf2_u32mf2(vfloat32mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_f32m1_u32m1(vfloat32m1_t src)
+{
+  vuint32m1_t a = vreinterpret_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_f32m2_u32m2(vfloat32m2_t src)
+{
+  vuint32m2_t a = vreinterpret_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_f32m4_u32m4(vfloat32m4_t src)
+{
+  vuint32m4_t a = vreinterpret_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_f32m8_u32m8(vfloat32m8_t src)
+{
+  vuint32m8_t a = vreinterpret_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_i64m1_u64m1(vint64m1_t src)
+{
+  vuint64m1_t a = vreinterpret_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_i64m2_u64m2(vint64m2_t src)
+{
+  vuint64m2_t a = vreinterpret_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_i64m4_u64m4(vint64m4_t src)
+{
+  vuint64m4_t a = vreinterpret_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_i64m8_u64m8(vint64m8_t src)
+{
+  vuint64m8_t a = vreinterpret_u64m8(src);
+  return a;
+}
+
+vfloat64m1_t test_vreinterpret_v_i64m1_f64m1(vint64m1_t src)
+{
+  vfloat64m1_t a = vreinterpret_f64m1(src);
+  return a;
+}
+
+vfloat64m2_t test_vreinterpret_v_i64m2_f64m2(vint64m2_t src)
+{
+  vfloat64m2_t a = vreinterpret_f64m2(src);
+  return a;
+}
+
+vfloat64m4_t test_vreinterpret_v_i64m4_f64m4(vint64m4_t src)
+{
+  vfloat64m4_t a = vreinterpret_f64m4(src);
+  return a;
+}
+
+vfloat64m8_t test_vreinterpret_v_i64m8_f64m8(vint64m8_t src)
+{
+  vfloat64m8_t a = vreinterpret_f64m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_u64m1_i64m1(vuint64m1_t src)
+{
+  vint64m1_t a = vreinterpret_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_u64m2_i64m2(vuint64m2_t src)
+{
+  vint64m2_t a = vreinterpret_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_u64m4_i64m4(vuint64m4_t src)
+{
+  vint64m4_t a = vreinterpret_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_u64m8_i64m8(vuint64m8_t src)
+{
+  vint64m8_t a = vreinterpret_i64m8(src);
+  return a;
+}
+
+vfloat64m1_t test_vreinterpret_v_u64m1_f64m1(vuint64m1_t src)
+{
+  vfloat64m1_t a = vreinterpret_f64m1(src);
+  return a;
+}
+
+vfloat64m2_t test_vreinterpret_v_u64m2_f64m2(vuint64m2_t src)
+{
+  vfloat64m2_t a = vreinterpret_f64m2(src);
+  return a;
+}
+
+vfloat64m4_t test_vreinterpret_v_u64m4_f64m4(vuint64m4_t src)
+{
+  vfloat64m4_t a = vreinterpret_f64m4(src);
+  return a;
+}
+
+vfloat64m8_t test_vreinterpret_v_u64m8_f64m8(vuint64m8_t src)
+{
+  vfloat64m8_t a = vreinterpret_f64m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_f64m1_i64m1(vfloat64m1_t src)
+{
+  vint64m1_t a = vreinterpret_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_f64m2_i64m2(vfloat64m2_t src)
+{
+  vint64m2_t a = vreinterpret_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_f64m4_i64m4(vfloat64m4_t src)
+{
+  vint64m4_t a = vreinterpret_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_f64m8_i64m8(vfloat64m8_t src)
+{
+  vint64m8_t a = vreinterpret_i64m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_f64m1_u64m1(vfloat64m1_t src)
+{
+  vuint64m1_t a = vreinterpret_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_f64m2_u64m2(vfloat64m2_t src)
+{
+  vuint64m2_t a = vreinterpret_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_f64m4_u64m4(vfloat64m4_t src)
+{
+  vuint64m4_t a = vreinterpret_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_f64m8_u64m8(vfloat64m8_t src)
+{
+  vuint64m8_t a = vreinterpret_u64m8(src);
+  return a;
+}
+
+vint16mf4_t test_vreinterpret_v_i8mf4_i16mf4(vint8mf4_t src)
+{
+  vint16mf4_t a = vreinterpret_i16mf4(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_i8mf2_i16mf2(vint8mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i8m1_i16m1(vint8m1_t src)
+{
+  vint16m1_t a = vreinterpret_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i8m2_i16m2(vint8m2_t src)
+{
+  vint16m2_t a = vreinterpret_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i8m4_i16m4(vint8m4_t src)
+{
+  vint16m4_t a = vreinterpret_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i8m8_i16m8(vint8m8_t src)
+{
+  vint16m8_t a = vreinterpret_i16m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_i8mf2_i32mf2(vint8mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i8m1_i32m1(vint8m1_t src)
+{
+  vint32m1_t a = vreinterpret_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i8m2_i32m2(vint8m2_t src)
+{
+  vint32m2_t a = vreinterpret_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i8m4_i32m4(vint8m4_t src)
+{
+  vint32m4_t a = vreinterpret_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i8m8_i32m8(vint8m8_t src)
+{
+  vint32m8_t a = vreinterpret_i32m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i8m1_i64m1(vint8m1_t src)
+{
+  vint64m1_t a = vreinterpret_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i8m2_i64m2(vint8m2_t src)
+{
+  vint64m2_t a = vreinterpret_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i8m4_i64m4(vint8m4_t src)
+{
+  vint64m4_t a = vreinterpret_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i8m8_i64m8(vint8m8_t src)
+{
+  vint64m8_t a = vreinterpret_i64m8(src);
+  return a;
+}
+
+vint8mf4_t test_vreinterpret_v_i16mf4_i8mf4(vint16mf4_t src)
+{
+  vint8mf4_t a = vreinterpret_i8mf4(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_i16mf2_i8mf2(vint16mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i16m1_i8m1(vint16m1_t src)
+{
+  vint8m1_t a = vreinterpret_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i16m2_i8m2(vint16m2_t src)
+{
+  vint8m2_t a = vreinterpret_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i16m4_i8m4(vint16m4_t src)
+{
+  vint8m4_t a = vreinterpret_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i16m8_i8m8(vint16m8_t src)
+{
+  vint8m8_t a = vreinterpret_i8m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_i16mf2_i32mf2(vint16mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i16m1_i32m1(vint16m1_t src)
+{
+  vint32m1_t a = vreinterpret_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i16m2_i32m2(vint16m2_t src)
+{
+  vint32m2_t a = vreinterpret_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i16m4_i32m4(vint16m4_t src)
+{
+  vint32m4_t a = vreinterpret_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i16m8_i32m8(vint16m8_t src)
+{
+  vint32m8_t a = vreinterpret_i32m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i16m1_i64m1(vint16m1_t src)
+{
+  vint64m1_t a = vreinterpret_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i16m2_i64m2(vint16m2_t src)
+{
+  vint64m2_t a = vreinterpret_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i16m4_i64m4(vint16m4_t src)
+{
+  vint64m4_t a = vreinterpret_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i16m8_i64m8(vint16m8_t src)
+{
+  vint64m8_t a = vreinterpret_i64m8(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_i32mf2_i8mf2(vint32mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i32m1_i8m1(vint32m1_t src)
+{
+  vint8m1_t a = vreinterpret_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i32m2_i8m2(vint32m2_t src)
+{
+  vint8m2_t a = vreinterpret_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i32m4_i8m4(vint32m4_t src)
+{
+  vint8m4_t a = vreinterpret_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i32m8_i8m8(vint32m8_t src)
+{
+  vint8m8_t a = vreinterpret_i8m8(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_i32mf2_i16mf2(vint32mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i32m1_i16m1(vint32m1_t src)
+{
+  vint16m1_t a = vreinterpret_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i32m2_i16m2(vint32m2_t src)
+{
+  vint16m2_t a = vreinterpret_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i32m4_i16m4(vint32m4_t src)
+{
+  vint16m4_t a = vreinterpret_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i32m8_i16m8(vint32m8_t src)
+{
+  vint16m8_t a = vreinterpret_i16m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i32m1_i64m1(vint32m1_t src)
+{
+  vint64m1_t a = vreinterpret_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i32m2_i64m2(vint32m2_t src)
+{
+  vint64m2_t a = vreinterpret_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i32m4_i64m4(vint32m4_t src)
+{
+  vint64m4_t a = vreinterpret_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i32m8_i64m8(vint32m8_t src)
+{
+  vint64m8_t a = vreinterpret_i64m8(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i64m1_i8m1(vint64m1_t src)
+{
+  vint8m1_t a = vreinterpret_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i64m2_i8m2(vint64m2_t src)
+{
+  vint8m2_t a = vreinterpret_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i64m4_i8m4(vint64m4_t src)
+{
+  vint8m4_t a = vreinterpret_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i64m8_i8m8(vint64m8_t src)
+{
+  vint8m8_t a = vreinterpret_i8m8(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i64m1_i16m1(vint64m1_t src)
+{
+  vint16m1_t a = vreinterpret_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i64m2_i16m2(vint64m2_t src)
+{
+  vint16m2_t a = vreinterpret_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i64m4_i16m4(vint64m4_t src)
+{
+  vint16m4_t a = vreinterpret_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i64m8_i16m8(vint64m8_t src)
+{
+  vint16m8_t a = vreinterpret_i16m8(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i64m1_i32m1(vint64m1_t src)
+{
+  vint32m1_t a = vreinterpret_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i64m2_i32m2(vint64m2_t src)
+{
+  vint32m2_t a = vreinterpret_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i64m4_i32m4(vint64m4_t src)
+{
+  vint32m4_t a = vreinterpret_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i64m8_i32m8(vint64m8_t src)
+{
+  vint32m8_t a = vreinterpret_i32m8(src);
+  return a;
+}
+
+vuint16mf4_t test_vreinterpret_v_u8mf4_u16mf4(vuint8mf4_t src)
+{
+  vuint16mf4_t a = vreinterpret_u16mf4(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_u8mf2_u16mf2(vuint8mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u8m1_u16m1(vuint8m1_t src)
+{
+  vuint16m1_t a = vreinterpret_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u8m2_u16m2(vuint8m2_t src)
+{
+  vuint16m2_t a = vreinterpret_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u8m4_u16m4(vuint8m4_t src)
+{
+  vuint16m4_t a = vreinterpret_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u8m8_u16m8(vuint8m8_t src)
+{
+  vuint16m8_t a = vreinterpret_u16m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_u8mf2_u32mf2(vuint8mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u8m1_u32m1(vuint8m1_t src)
+{
+  vuint32m1_t a = vreinterpret_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u8m2_u32m2(vuint8m2_t src)
+{
+  vuint32m2_t a = vreinterpret_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u8m4_u32m4(vuint8m4_t src)
+{
+  vuint32m4_t a = vreinterpret_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u8m8_u32m8(vuint8m8_t src)
+{
+  vuint32m8_t a = vreinterpret_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u8m1_u64m1(vuint8m1_t src)
+{
+  vuint64m1_t a = vreinterpret_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u8m2_u64m2(vuint8m2_t src)
+{
+  vuint64m2_t a = vreinterpret_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u8m4_u64m4(vuint8m4_t src)
+{
+  vuint64m4_t a = vreinterpret_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u8m8_u64m8(vuint8m8_t src)
+{
+  vuint64m8_t a = vreinterpret_u64m8(src);
+  return a;
+}
+
+vuint8mf4_t test_vreinterpret_v_u16mf4_u8mf4(vuint16mf4_t src)
+{
+  vuint8mf4_t a = vreinterpret_u8mf4(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_u16mf2_u8mf2(vuint16mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u16m1_u8m1(vuint16m1_t src)
+{
+  vuint8m1_t a = vreinterpret_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u16m2_u8m2(vuint16m2_t src)
+{
+  vuint8m2_t a = vreinterpret_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u16m4_u8m4(vuint16m4_t src)
+{
+  vuint8m4_t a = vreinterpret_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u16m8_u8m8(vuint16m8_t src)
+{
+  vuint8m8_t a = vreinterpret_u8m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_u16mf2_u32mf2(vuint16mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u16m1_u32m1(vuint16m1_t src)
+{
+  vuint32m1_t a = vreinterpret_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u16m2_u32m2(vuint16m2_t src)
+{
+  vuint32m2_t a = vreinterpret_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u16m4_u32m4(vuint16m4_t src)
+{
+  vuint32m4_t a = vreinterpret_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u16m8_u32m8(vuint16m8_t src)
+{
+  vuint32m8_t a = vreinterpret_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u16m1_u64m1(vuint16m1_t src)
+{
+  vuint64m1_t a = vreinterpret_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u16m2_u64m2(vuint16m2_t src)
+{
+  vuint64m2_t a = vreinterpret_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u16m4_u64m4(vuint16m4_t src)
+{
+  vuint64m4_t a = vreinterpret_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u16m8_u64m8(vuint16m8_t src)
+{
+  vuint64m8_t a = vreinterpret_u64m8(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_u32mf2_u8mf2(vuint32mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u32m1_u8m1(vuint32m1_t src)
+{
+  vuint8m1_t a = vreinterpret_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u32m2_u8m2(vuint32m2_t src)
+{
+  vuint8m2_t a = vreinterpret_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u32m4_u8m4(vuint32m4_t src)
+{
+  vuint8m4_t a = vreinterpret_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u32m8_u8m8(vuint32m8_t src)
+{
+  vuint8m8_t a = vreinterpret_u8m8(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_u32mf2_u16mf2(vuint32mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u32m1_u16m1(vuint32m1_t src)
+{
+  vuint16m1_t a = vreinterpret_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u32m2_u16m2(vuint32m2_t src)
+{
+  vuint16m2_t a = vreinterpret_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u32m4_u16m4(vuint32m4_t src)
+{
+  vuint16m4_t a = vreinterpret_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u32m8_u16m8(vuint32m8_t src)
+{
+  vuint16m8_t a = vreinterpret_u16m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u32m1_u64m1(vuint32m1_t src)
+{
+  vuint64m1_t a = vreinterpret_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u32m2_u64m2(vuint32m2_t src)
+{
+  vuint64m2_t a = vreinterpret_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u32m4_u64m4(vuint32m4_t src)
+{
+  vuint64m4_t a = vreinterpret_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u32m8_u64m8(vuint32m8_t src)
+{
+  vuint64m8_t a = vreinterpret_u64m8(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u64m1_u8m1(vuint64m1_t src)
+{
+  vuint8m1_t a = vreinterpret_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u64m2_u8m2(vuint64m2_t src)
+{
+  vuint8m2_t a = vreinterpret_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u64m4_u8m4(vuint64m4_t src)
+{
+  vuint8m4_t a = vreinterpret_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u64m8_u8m8(vuint64m8_t src)
+{
+  vuint8m8_t a = vreinterpret_u8m8(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u64m1_u16m1(vuint64m1_t src)
+{
+  vuint16m1_t a = vreinterpret_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u64m2_u16m2(vuint64m2_t src)
+{
+  vuint16m2_t a = vreinterpret_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u64m4_u16m4(vuint64m4_t src)
+{
+  vuint16m4_t a = vreinterpret_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u64m8_u16m8(vuint64m8_t src)
+{
+  vuint16m8_t a = vreinterpret_u16m8(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u64m1_u32m1(vuint64m1_t src)
+{
+  vuint32m1_t a = vreinterpret_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u64m2_u32m2(vuint64m2_t src)
+{
+  vuint32m2_t a = vreinterpret_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u64m4_u32m4(vuint64m4_t src)
+{
+  vuint32m4_t a = vreinterpret_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u64m8_u32m8(vuint64m8_t src)
+{
+  vuint32m8_t a = vreinterpret_u32m8(src);
+  return a;
+}
+
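+/* vlmul_ext: reinterpret a vector as a larger-LMUL type.  The values of
+   the extended upper part are undefined.  */
+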
+vint8mf4_t test_vlmul_ext_v_i8mf8_i8mf4(vint8mf8_t op1)
+{
+  vint8mf4_t a = vlmul_ext_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf8_i8mf2(vint8mf8_t op1)
+{
+  vint8mf2_t a = vlmul_ext_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf8_i8m1(vint8mf8_t op1)
+{
+  vint8m1_t a = vlmul_ext_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf8_i8m2(vint8mf8_t op1)
+{
+  vint8m2_t a = vlmul_ext_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf8_i8m4(vint8mf8_t op1)
+{
+  vint8m4_t a = vlmul_ext_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf8_i8m8(vint8mf8_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf4_i8mf2(vint8mf4_t op1)
+{
+  vint8mf2_t a = vlmul_ext_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf4_i8m1(vint8mf4_t op1)
+{
+  vint8m1_t a = vlmul_ext_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf4_i8m2(vint8mf4_t op1)
+{
+  vint8m2_t a = vlmul_ext_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf4_i8m4(vint8mf4_t op1)
+{
+  vint8m4_t a = vlmul_ext_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf4_i8m8(vint8mf4_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf2_i8m1(vint8mf2_t op1)
+{
+  vint8m1_t a = vlmul_ext_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf2_i8m2(vint8mf2_t op1)
+{
+  vint8m2_t a = vlmul_ext_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf2_i8m4(vint8mf2_t op1)
+{
+  vint8m4_t a = vlmul_ext_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf2_i8m8(vint8mf2_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8m1_i8m2(vint8m1_t op1)
+{
+  vint8m2_t a = vlmul_ext_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8m1_i8m4(vint8m1_t op1)
+{
+  vint8m4_t a = vlmul_ext_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m1_i8m8(vint8m1_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8m2_i8m4(vint8m2_t op1)
+{
+  vint8m4_t a = vlmul_ext_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m2_i8m8(vint8m2_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m4_i8m8(vint8m4_t op1)
+{
+  vint8m8_t a = vlmul_ext_i8m8(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_ext_v_i16mf4_i16mf2(vint16mf4_t op1)
+{
+  vint16mf2_t a = vlmul_ext_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf4_i16m1(vint16mf4_t op1)
+{
+  vint16m1_t a = vlmul_ext_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf4_i16m2(vint16mf4_t op1)
+{
+  vint16m2_t a = vlmul_ext_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf4_i16m4(vint16mf4_t op1)
+{
+  vint16m4_t a = vlmul_ext_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf4_i16m8(vint16mf4_t op1)
+{
+  vint16m8_t a = vlmul_ext_i16m8(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf2_i16m1(vint16mf2_t op1)
+{
+  vint16m1_t a = vlmul_ext_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf2_i16m2(vint16mf2_t op1)
+{
+  vint16m2_t a = vlmul_ext_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf2_i16m4(vint16mf2_t op1)
+{
+  vint16m4_t a = vlmul_ext_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf2_i16m8(vint16mf2_t op1)
+{
+  vint16m8_t a = vlmul_ext_i16m8(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16m1_i16m2(vint16m1_t op1)
+{
+  vint16m2_t a = vlmul_ext_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16m1_i16m4(vint16m1_t op1)
+{
+  vint16m4_t a = vlmul_ext_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m1_i16m8(vint16m1_t op1)
+{
+  vint16m8_t a = vlmul_ext_i16m8(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16m2_i16m4(vint16m2_t op1)
+{
+  vint16m4_t a = vlmul_ext_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m2_i16m8(vint16m2_t op1)
+{
+  vint16m8_t a = vlmul_ext_i16m8(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m4_i16m8(vint16m4_t op1)
+{
+  vint16m8_t a = vlmul_ext_i16m8(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_ext_v_i32mf2_i32m1(vint32mf2_t op1)
+{
+  vint32m1_t a = vlmul_ext_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_ext_v_i32mf2_i32m2(vint32mf2_t op1)
+{
+  vint32m2_t a = vlmul_ext_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32mf2_i32m4(vint32mf2_t op1)
+{
+  vint32m4_t a = vlmul_ext_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32mf2_i32m8(vint32mf2_t op1)
+{
+  vint32m8_t a = vlmul_ext_i32m8(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_ext_v_i32m1_i32m2(vint32m1_t op1)
+{
+  vint32m2_t a = vlmul_ext_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32m1_i32m4(vint32m1_t op1)
+{
+  vint32m4_t a = vlmul_ext_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m1_i32m8(vint32m1_t op1)
+{
+  vint32m8_t a = vlmul_ext_i32m8(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32m2_i32m4(vint32m2_t op1)
+{
+  vint32m4_t a = vlmul_ext_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m2_i32m8(vint32m2_t op1)
+{
+  vint32m8_t a = vlmul_ext_i32m8(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m4_i32m8(vint32m4_t op1)
+{
+  vint32m8_t a = vlmul_ext_i32m8(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_ext_v_i64m1_i64m2(vint64m1_t op1)
+{
+  vint64m2_t a = vlmul_ext_i64m2(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_ext_v_i64m1_i64m4(vint64m1_t op1)
+{
+  vint64m4_t a = vlmul_ext_i64m4(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m1_i64m8(vint64m1_t op1)
+{
+  vint64m8_t a = vlmul_ext_i64m8(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_ext_v_i64m2_i64m4(vint64m2_t op1)
+{
+  vint64m4_t a = vlmul_ext_i64m4(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m2_i64m8(vint64m2_t op1)
+{
+  vint64m8_t a = vlmul_ext_i64m8(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m4_i64m8(vint64m4_t op1)
+{
+  vint64m8_t a = vlmul_ext_i64m8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_ext_v_u8mf8_u8mf4(vuint8mf8_t op1)
+{
+  vuint8mf4_t a = vlmul_ext_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf8_u8mf2(vuint8mf8_t op1)
+{
+  vuint8mf2_t a = vlmul_ext_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf8_u8m1(vuint8mf8_t op1)
+{
+  vuint8m1_t a = vlmul_ext_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf8_u8m2(vuint8mf8_t op1)
+{
+  vuint8m2_t a = vlmul_ext_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf8_u8m4(vuint8mf8_t op1)
+{
+  vuint8m4_t a = vlmul_ext_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf8_u8m8(vuint8mf8_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf4_u8mf2(vuint8mf4_t op1)
+{
+  vuint8mf2_t a = vlmul_ext_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf4_u8m1(vuint8mf4_t op1)
+{
+  vuint8m1_t a = vlmul_ext_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf4_u8m2(vuint8mf4_t op1)
+{
+  vuint8m2_t a = vlmul_ext_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf4_u8m4(vuint8mf4_t op1)
+{
+  vuint8m4_t a = vlmul_ext_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf4_u8m8(vuint8mf4_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf2_u8m1(vuint8mf2_t op1)
+{
+  vuint8m1_t a = vlmul_ext_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf2_u8m2(vuint8mf2_t op1)
+{
+  vuint8m2_t a = vlmul_ext_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf2_u8m4(vuint8mf2_t op1)
+{
+  vuint8m4_t a = vlmul_ext_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf2_u8m8(vuint8mf2_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8m1_u8m2(vuint8m1_t op1)
+{
+  vuint8m2_t a = vlmul_ext_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m1_u8m4(vuint8m1_t op1)
+{
+  vuint8m4_t a = vlmul_ext_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m1_u8m8(vuint8m1_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m2_u8m4(vuint8m2_t op1)
+{
+  vuint8m4_t a = vlmul_ext_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m2_u8m8(vuint8m2_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m4_u8m8(vuint8m4_t op1)
+{
+  vuint8m8_t a = vlmul_ext_u8m8(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_ext_v_u16mf4_u16mf2(vuint16mf4_t op1)
+{
+  vuint16mf2_t a = vlmul_ext_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf4_u16m1(vuint16mf4_t op1)
+{
+  vuint16m1_t a = vlmul_ext_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf4_u16m2(vuint16mf4_t op1)
+{
+  vuint16m2_t a = vlmul_ext_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf4_u16m4(vuint16mf4_t op1)
+{
+  vuint16m4_t a = vlmul_ext_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf4_u16m8(vuint16mf4_t op1)
+{
+  vuint16m8_t a = vlmul_ext_u16m8(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf2_u16m1(vuint16mf2_t op1)
+{
+  vuint16m1_t a = vlmul_ext_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf2_u16m2(vuint16mf2_t op1)
+{
+  vuint16m2_t a = vlmul_ext_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf2_u16m4(vuint16mf2_t op1)
+{
+  vuint16m4_t a = vlmul_ext_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf2_u16m8(vuint16mf2_t op1)
+{
+  vuint16m8_t a = vlmul_ext_u16m8(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16m1_u16m2(vuint16m1_t op1)
+{
+  vuint16m2_t a = vlmul_ext_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m1_u16m4(vuint16m1_t op1)
+{
+  vuint16m4_t a = vlmul_ext_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m1_u16m8(vuint16m1_t op1)
+{
+  vuint16m8_t a = vlmul_ext_u16m8(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m2_u16m4(vuint16m2_t op1)
+{
+  vuint16m4_t a = vlmul_ext_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m2_u16m8(vuint16m2_t op1)
+{
+  vuint16m8_t a = vlmul_ext_u16m8(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m4_u16m8(vuint16m4_t op1)
+{
+  vuint16m8_t a = vlmul_ext_u16m8(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_ext_v_u32mf2_u32m1(vuint32mf2_t op1)
+{
+  vuint32m1_t a = vlmul_ext_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_ext_v_u32mf2_u32m2(vuint32mf2_t op1)
+{
+  vuint32m2_t a = vlmul_ext_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32mf2_u32m4(vuint32mf2_t op1)
+{
+  vuint32m4_t a = vlmul_ext_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32mf2_u32m8(vuint32mf2_t op1)
+{
+  vuint32m8_t a = vlmul_ext_u32m8(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_ext_v_u32m1_u32m2(vuint32m1_t op1)
+{
+  vuint32m2_t a = vlmul_ext_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m1_u32m4(vuint32m1_t op1)
+{
+  vuint32m4_t a = vlmul_ext_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m1_u32m8(vuint32m1_t op1)
+{
+  vuint32m8_t a = vlmul_ext_u32m8(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m2_u32m4(vuint32m2_t op1)
+{
+  vuint32m4_t a = vlmul_ext_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m2_u32m8(vuint32m2_t op1)
+{
+  vuint32m8_t a = vlmul_ext_u32m8(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m4_u32m8(vuint32m4_t op1)
+{
+  vuint32m8_t a = vlmul_ext_u32m8(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_ext_v_u64m1_u64m2(vuint64m1_t op1)
+{
+  vuint64m2_t a = vlmul_ext_u64m2(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m1_u64m4(vuint64m1_t op1)
+{
+  vuint64m4_t a = vlmul_ext_u64m4(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m1_u64m8(vuint64m1_t op1)
+{
+  vuint64m8_t a = vlmul_ext_u64m8(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m2_u64m4(vuint64m2_t op1)
+{
+  vuint64m4_t a = vlmul_ext_u64m4(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m2_u64m8(vuint64m2_t op1)
+{
+  vuint64m8_t a = vlmul_ext_u64m8(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m4_u64m8(vuint64m4_t op1)
+{
+  vuint64m8_t a = vlmul_ext_u64m8(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_ext_v_f32mf2_f32m1(vfloat32mf2_t op1)
+{
+  vfloat32m1_t a = vlmul_ext_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32mf2_f32m2(vfloat32mf2_t op1)
+{
+  vfloat32m2_t a = vlmul_ext_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32mf2_f32m4(vfloat32mf2_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32mf2_f32m8(vfloat32mf2_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_f32m8(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32m1_f32m2(vfloat32m1_t op1)
+{
+  vfloat32m2_t a = vlmul_ext_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m1_f32m4(vfloat32m1_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m1_f32m8(vfloat32m1_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_f32m8(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m2_f32m4(vfloat32m2_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m2_f32m8(vfloat32m2_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_f32m8(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m4_f32m8(vfloat32m4_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_f32m8(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_ext_v_f64m1_f64m2(vfloat64m1_t op1)
+{
+  vfloat64m2_t a = vlmul_ext_f64m2(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m1_f64m4(vfloat64m1_t op1)
+{
+  vfloat64m4_t a = vlmul_ext_f64m4(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m1_f64m8(vfloat64m1_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_f64m8(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m2_f64m4(vfloat64m2_t op1)
+{
+  vfloat64m4_t a = vlmul_ext_f64m4(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m2_f64m8(vfloat64m2_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_f64m8(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m4_f64m8(vfloat64m4_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_f64m8(op1);
+  return a;
+}
+
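+/* vlmul_trunc: reinterpret the low part of a vector as a smaller-LMUL
+   type.  */
+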
+vint8mf8_t test_vlmul_trunc_v_i8mf4_i8mf8(vint8mf4_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8mf2_i8mf8(vint8mf2_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8mf2_i8mf4(vint8mf2_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_i8mf4(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m1_i8mf8(vint8m1_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m1_i8mf4(vint8m1_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m1_i8mf2(vint8m1_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_i8mf2(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m2_i8mf8(vint8m2_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m2_i8mf4(vint8m2_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m2_i8mf2(vint8m2_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m2_i8m1(vint8m2_t op1)
+{
+  vint8m1_t a = vlmul_trunc_i8m1(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m4_i8mf8(vint8m4_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m4_i8mf4(vint8m4_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m4_i8mf2(vint8m4_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m4_i8m1(vint8m4_t op1)
+{
+  vint8m1_t a = vlmul_trunc_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_trunc_v_i8m4_i8m2(vint8m4_t op1)
+{
+  vint8m2_t a = vlmul_trunc_i8m2(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m8_i8mf8(vint8m8_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m8_i8mf4(vint8m8_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m8_i8mf2(vint8m8_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m8_i8m1(vint8m8_t op1)
+{
+  vint8m1_t a = vlmul_trunc_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_trunc_v_i8m8_i8m2(vint8m8_t op1)
+{
+  vint8m2_t a = vlmul_trunc_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_trunc_v_i8m8_i8m4(vint8m8_t op1)
+{
+  vint8m4_t a = vlmul_trunc_i8m4(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16mf2_i16mf4(vint16mf2_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_i16mf4(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m1_i16mf4(vint16m1_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m1_i16mf2(vint16m1_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_i16mf2(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m2_i16mf4(vint16m2_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m2_i16mf2(vint16m2_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m2_i16m1(vint16m2_t op1)
+{
+  vint16m1_t a = vlmul_trunc_i16m1(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m4_i16mf4(vint16m4_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m4_i16mf2(vint16m4_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m4_i16m1(vint16m4_t op1)
+{
+  vint16m1_t a = vlmul_trunc_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_trunc_v_i16m4_i16m2(vint16m4_t op1)
+{
+  vint16m2_t a = vlmul_trunc_i16m2(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m8_i16mf4(vint16m8_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m8_i16mf2(vint16m8_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m8_i16m1(vint16m8_t op1)
+{
+  vint16m1_t a = vlmul_trunc_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_trunc_v_i16m8_i16m2(vint16m8_t op1)
+{
+  vint16m2_t a = vlmul_trunc_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_trunc_v_i16m8_i16m4(vint16m8_t op1)
+{
+  vint16m4_t a = vlmul_trunc_i16m4(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m1_i32mf2(vint32m1_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_i32mf2(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m2_i32mf2(vint32m2_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m2_i32m1(vint32m2_t op1)
+{
+  vint32m1_t a = vlmul_trunc_i32m1(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m4_i32mf2(vint32m4_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m4_i32m1(vint32m4_t op1)
+{
+  vint32m1_t a = vlmul_trunc_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_trunc_v_i32m4_i32m2(vint32m4_t op1)
+{
+  vint32m2_t a = vlmul_trunc_i32m2(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m8_i32mf2(vint32m8_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m8_i32m1(vint32m8_t op1)
+{
+  vint32m1_t a = vlmul_trunc_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_trunc_v_i32m8_i32m2(vint32m8_t op1)
+{
+  vint32m2_t a = vlmul_trunc_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_trunc_v_i32m8_i32m4(vint32m8_t op1)
+{
+  vint32m4_t a = vlmul_trunc_i32m4(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m2_i64m1(vint64m2_t op1)
+{
+  vint64m1_t a = vlmul_trunc_i64m1(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m4_i64m1(vint64m4_t op1)
+{
+  vint64m1_t a = vlmul_trunc_i64m1(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_trunc_v_i64m4_i64m2(vint64m4_t op1)
+{
+  vint64m2_t a = vlmul_trunc_i64m2(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m8_i64m1(vint64m8_t op1)
+{
+  vint64m1_t a = vlmul_trunc_i64m1(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_trunc_v_i64m8_i64m2(vint64m8_t op1)
+{
+  vint64m2_t a = vlmul_trunc_i64m2(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_trunc_v_i64m8_i64m4(vint64m8_t op1)
+{
+  vint64m4_t a = vlmul_trunc_i64m4(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8mf4_u8mf8(vuint8mf4_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8mf2_u8mf8(vuint8mf2_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8mf2_u8mf4(vuint8mf2_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_u8mf4(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m1_u8mf8(vuint8m1_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m1_u8mf4(vuint8m1_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m1_u8mf2(vuint8m1_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_u8mf2(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m2_u8mf8(vuint8m2_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m2_u8mf4(vuint8m2_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m2_u8mf2(vuint8m2_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m2_u8m1(vuint8m2_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_u8m1(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m4_u8mf8(vuint8m4_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m4_u8mf4(vuint8m4_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m4_u8mf2(vuint8m4_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m4_u8m1(vuint8m4_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_trunc_v_u8m4_u8m2(vuint8m4_t op1)
+{
+  vuint8m2_t a = vlmul_trunc_u8m2(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m8_u8mf8(vuint8m8_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m8_u8mf4(vuint8m8_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m8_u8mf2(vuint8m8_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m8_u8m1(vuint8m8_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_trunc_v_u8m8_u8m2(vuint8m8_t op1)
+{
+  vuint8m2_t a = vlmul_trunc_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_trunc_v_u8m8_u8m4(vuint8m8_t op1)
+{
+  vuint8m4_t a = vlmul_trunc_u8m4(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16mf2_u16mf4(vuint16mf2_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_u16mf4(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m1_u16mf4(vuint16m1_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m1_u16mf2(vuint16m1_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_u16mf2(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m2_u16mf4(vuint16m2_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m2_u16mf2(vuint16m2_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m2_u16m1(vuint16m2_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_u16m1(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m4_u16mf4(vuint16m4_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m4_u16mf2(vuint16m4_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m4_u16m1(vuint16m4_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_trunc_v_u16m4_u16m2(vuint16m4_t op1)
+{
+  vuint16m2_t a = vlmul_trunc_u16m2(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m8_u16mf4(vuint16m8_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m8_u16mf2(vuint16m8_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m8_u16m1(vuint16m8_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_trunc_v_u16m8_u16m2(vuint16m8_t op1)
+{
+  vuint16m2_t a = vlmul_trunc_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_trunc_v_u16m8_u16m4(vuint16m8_t op1)
+{
+  vuint16m4_t a = vlmul_trunc_u16m4(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m1_u32mf2(vuint32m1_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_u32mf2(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m2_u32mf2(vuint32m2_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m2_u32m1(vuint32m2_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_u32m1(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m4_u32mf2(vuint32m4_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m4_u32m1(vuint32m4_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_trunc_v_u32m4_u32m2(vuint32m4_t op1)
+{
+  vuint32m2_t a = vlmul_trunc_u32m2(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m8_u32mf2(vuint32m8_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m8_u32m1(vuint32m8_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_trunc_v_u32m8_u32m2(vuint32m8_t op1)
+{
+  vuint32m2_t a = vlmul_trunc_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_trunc_v_u32m8_u32m4(vuint32m8_t op1)
+{
+  vuint32m4_t a = vlmul_trunc_u32m4(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m2_u64m1(vuint64m2_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_u64m1(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m4_u64m1(vuint64m4_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_u64m1(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_trunc_v_u64m4_u64m2(vuint64m4_t op1)
+{
+  vuint64m2_t a = vlmul_trunc_u64m2(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m8_u64m1(vuint64m8_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_u64m1(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_trunc_v_u64m8_u64m2(vuint64m8_t op1)
+{
+  vuint64m2_t a = vlmul_trunc_u64m2(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_trunc_v_u64m8_u64m4(vuint64m8_t op1)
+{
+  vuint64m4_t a = vlmul_trunc_u64m4(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m1_f32mf2(vfloat32m1_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_f32mf2(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m2_f32mf2(vfloat32m2_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m2_f32m1(vfloat32m2_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_f32m1(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m4_f32mf2(vfloat32m4_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m4_f32m1(vfloat32m4_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_trunc_v_f32m4_f32m2(vfloat32m4_t op1)
+{
+  vfloat32m2_t a = vlmul_trunc_f32m2(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m8_f32mf2(vfloat32m8_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m8_f32m1(vfloat32m8_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_trunc_v_f32m8_f32m2(vfloat32m8_t op1)
+{
+  vfloat32m2_t a = vlmul_trunc_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_trunc_v_f32m8_f32m4(vfloat32m8_t op1)
+{
+  vfloat32m4_t a = vlmul_trunc_f32m4(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m2_f64m1(vfloat64m2_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_f64m1(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m4_f64m1(vfloat64m4_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_f64m1(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_trunc_v_f64m4_f64m2(vfloat64m4_t op1)
+{
+  vfloat64m2_t a = vlmul_trunc_f64m2(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m8_f64m1(vfloat64m8_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_f64m1(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_trunc_v_f64m8_f64m2(vfloat64m8_t op1)
+{
+  vfloat64m2_t a = vlmul_trunc_f64m2(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_trunc_v_f64m8_f64m4(vfloat64m8_t op1)
+{
+  vfloat64m4_t a = vlmul_trunc_f64m4(op1);
+  return a;
+}
\ No newline at end of file
diff --git a/gcc/testsuite/g++.target/riscv/rvv/rvv-intrinsic.exp b/gcc/testsuite/g++.target/riscv/rvv/rvv-intrinsic.exp
new file mode 100644
index 00000000000..c8db25f0fbd
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/rvv-intrinsic.exp
@@ -0,0 +1,39 @@
+# Copyright (C) 2022 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Test the front end for C++.  We don't need to test back-end code
+# generation on RV32 systems for C++ because it is already covered by
+# the C tests.
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv64*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib g++-dg.exp
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+set CFLAGS "-march=rv64gcv_zfh -O3"
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.C]] \
+	"" $CFLAGS
+
+# All done.
+dg-finish
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/misc_func.c b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/misc_func.c
new file mode 100644
index 00000000000..387da4205f9
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/misc_func.c
@@ -0,0 +1,2921 @@
+
+/* { dg-do compile } */
+/* { dg-skip-if "test vector intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
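+/* The dg-skip-if above skips this test unless a vector-enabled -march
+   (rv*v*) is in effect.  */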
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+
+/* Reinterpret between different types under the same SEW and LMUL.  */
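+/* These casts only reinterpret the bit pattern of SRC; no data
+   conversion is performed.  */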
+
+vuint8mf8_t test_vreinterpret_v_i8mf8_u8mf8(vint8mf8_t src)
+{
+  vuint8mf8_t a = vreinterpret_v_i8mf8_u8mf8(src);
+  return a;
+}
+
+vuint8mf4_t test_vreinterpret_v_i8mf4_u8mf4(vint8mf4_t src)
+{
+  vuint8mf4_t a = vreinterpret_v_i8mf4_u8mf4(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_i8mf2_u8mf2(vint8mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_v_i8mf2_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_i8m1_u8m1(vint8m1_t src)
+{
+  vuint8m1_t a = vreinterpret_v_i8m1_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_i8m2_u8m2(vint8m2_t src)
+{
+  vuint8m2_t a = vreinterpret_v_i8m2_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_i8m4_u8m4(vint8m4_t src)
+{
+  vuint8m4_t a = vreinterpret_v_i8m4_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_i8m8_u8m8(vint8m8_t src)
+{
+  vuint8m8_t a = vreinterpret_v_i8m8_u8m8(src);
+  return a;
+}
+
+vint8mf8_t test_vreinterpret_v_u8mf8_i8mf8(vuint8mf8_t src)
+{
+  vint8mf8_t a = vreinterpret_v_u8mf8_i8mf8(src);
+  return a;
+}
+
+vint8mf4_t test_vreinterpret_v_u8mf4_i8mf4(vuint8mf4_t src)
+{
+  vint8mf4_t a = vreinterpret_v_u8mf4_i8mf4(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_u8mf2_i8mf2(vuint8mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_v_u8mf2_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_u8m1_i8m1(vuint8m1_t src)
+{
+  vint8m1_t a = vreinterpret_v_u8m1_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_u8m2_i8m2(vuint8m2_t src)
+{
+  vint8m2_t a = vreinterpret_v_u8m2_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_u8m4_i8m4(vuint8m4_t src)
+{
+  vint8m4_t a = vreinterpret_v_u8m4_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_u8m8_i8m8(vuint8m8_t src)
+{
+  vint8m8_t a = vreinterpret_v_u8m8_i8m8(src);
+  return a;
+}
+
+vuint16mf4_t test_vreinterpret_v_i16mf4_u16mf4(vint16mf4_t src)
+{
+  vuint16mf4_t a = vreinterpret_v_i16mf4_u16mf4(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_i16mf2_u16mf2(vint16mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_v_i16mf2_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_i16m1_u16m1(vint16m1_t src)
+{
+  vuint16m1_t a = vreinterpret_v_i16m1_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_i16m2_u16m2(vint16m2_t src)
+{
+  vuint16m2_t a = vreinterpret_v_i16m2_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_i16m4_u16m4(vint16m4_t src)
+{
+  vuint16m4_t a = vreinterpret_v_i16m4_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_i16m8_u16m8(vint16m8_t src)
+{
+  vuint16m8_t a = vreinterpret_v_i16m8_u16m8(src);
+  return a;
+}
+
+vint16mf4_t test_vreinterpret_v_u16mf4_i16mf4(vuint16mf4_t src)
+{
+  vint16mf4_t a = vreinterpret_v_u16mf4_i16mf4(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_u16mf2_i16mf2(vuint16mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_v_u16mf2_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_u16m1_i16m1(vuint16m1_t src)
+{
+  vint16m1_t a = vreinterpret_v_u16m1_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_u16m2_i16m2(vuint16m2_t src)
+{
+  vint16m2_t a = vreinterpret_v_u16m2_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_u16m4_i16m4(vuint16m4_t src)
+{
+  vint16m4_t a = vreinterpret_v_u16m4_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_u16m8_i16m8(vuint16m8_t src)
+{
+  vint16m8_t a = vreinterpret_v_u16m8_i16m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_i32mf2_u32mf2(vint32mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_v_i32mf2_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_i32m1_u32m1(vint32m1_t src)
+{
+  vuint32m1_t a = vreinterpret_v_i32m1_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_i32m2_u32m2(vint32m2_t src)
+{
+  vuint32m2_t a = vreinterpret_v_i32m2_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_i32m4_u32m4(vint32m4_t src)
+{
+  vuint32m4_t a = vreinterpret_v_i32m4_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_i32m8_u32m8(vint32m8_t src)
+{
+  vuint32m8_t a = vreinterpret_v_i32m8_u32m8(src);
+  return a;
+}
+
+vfloat32mf2_t test_vreinterpret_v_i32mf2_f32mf2(vint32mf2_t src)
+{
+  vfloat32mf2_t a = vreinterpret_v_i32mf2_f32mf2(src);
+  return a;
+}
+
+vfloat32m1_t test_vreinterpret_v_i32m1_f32m1(vint32m1_t src)
+{
+  vfloat32m1_t a = vreinterpret_v_i32m1_f32m1(src);
+  return a;
+}
+
+vfloat32m2_t test_vreinterpret_v_i32m2_f32m2(vint32m2_t src)
+{
+  vfloat32m2_t a = vreinterpret_v_i32m2_f32m2(src);
+  return a;
+}
+
+vfloat32m4_t test_vreinterpret_v_i32m4_f32m4(vint32m4_t src)
+{
+  vfloat32m4_t a = vreinterpret_v_i32m4_f32m4(src);
+  return a;
+}
+
+vfloat32m8_t test_vreinterpret_v_i32m8_f32m8(vint32m8_t src)
+{
+  vfloat32m8_t a = vreinterpret_v_i32m8_f32m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_u32mf2_i32mf2(vuint32mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_v_u32mf2_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_u32m1_i32m1(vuint32m1_t src)
+{
+  vint32m1_t a = vreinterpret_v_u32m1_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_u32m2_i32m2(vuint32m2_t src)
+{
+  vint32m2_t a = vreinterpret_v_u32m2_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_u32m4_i32m4(vuint32m4_t src)
+{
+  vint32m4_t a = vreinterpret_v_u32m4_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_u32m8_i32m8(vuint32m8_t src)
+{
+  vint32m8_t a = vreinterpret_v_u32m8_i32m8(src);
+  return a;
+}
+
+vfloat32mf2_t test_vreinterpret_v_u32mf2_f32mf2(vuint32mf2_t src)
+{
+  vfloat32mf2_t a = vreinterpret_v_u32mf2_f32mf2(src);
+  return a;
+}
+
+vfloat32m1_t test_vreinterpret_v_u32m1_f32m1(vuint32m1_t src)
+{
+  vfloat32m1_t a = vreinterpret_v_u32m1_f32m1(src);
+  return a;
+}
+
+vfloat32m2_t test_vreinterpret_v_u32m2_f32m2(vuint32m2_t src)
+{
+  vfloat32m2_t a = vreinterpret_v_u32m2_f32m2(src);
+  return a;
+}
+
+vfloat32m4_t test_vreinterpret_v_u32m4_f32m4(vuint32m4_t src)
+{
+  vfloat32m4_t a = vreinterpret_v_u32m4_f32m4(src);
+  return a;
+}
+
+vfloat32m8_t test_vreinterpret_v_u32m8_f32m8(vuint32m8_t src)
+{
+  vfloat32m8_t a = vreinterpret_v_u32m8_f32m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_f32mf2_i32mf2(vfloat32mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_v_f32mf2_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_f32m1_i32m1(vfloat32m1_t src)
+{
+  vint32m1_t a = vreinterpret_v_f32m1_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_f32m2_i32m2(vfloat32m2_t src)
+{
+  vint32m2_t a = vreinterpret_v_f32m2_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_f32m4_i32m4(vfloat32m4_t src)
+{
+  vint32m4_t a = vreinterpret_v_f32m4_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_f32m8_i32m8(vfloat32m8_t src)
+{
+  vint32m8_t a = vreinterpret_v_f32m8_i32m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_f32mf2_u32mf2(vfloat32mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_v_f32mf2_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_f32m1_u32m1(vfloat32m1_t src)
+{
+  vuint32m1_t a = vreinterpret_v_f32m1_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_f32m2_u32m2(vfloat32m2_t src)
+{
+  vuint32m2_t a = vreinterpret_v_f32m2_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_f32m4_u32m4(vfloat32m4_t src)
+{
+  vuint32m4_t a = vreinterpret_v_f32m4_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_f32m8_u32m8(vfloat32m8_t src)
+{
+  vuint32m8_t a = vreinterpret_v_f32m8_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_i64m1_u64m1(vint64m1_t src)
+{
+  vuint64m1_t a = vreinterpret_v_i64m1_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_i64m2_u64m2(vint64m2_t src)
+{
+  vuint64m2_t a = vreinterpret_v_i64m2_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_i64m4_u64m4(vint64m4_t src)
+{
+  vuint64m4_t a = vreinterpret_v_i64m4_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_i64m8_u64m8(vint64m8_t src)
+{
+  vuint64m8_t a = vreinterpret_v_i64m8_u64m8(src);
+  return a;
+}
+
+vfloat64m1_t test_vreinterpret_v_i64m1_f64m1(vint64m1_t src)
+{
+  vfloat64m1_t a = vreinterpret_v_i64m1_f64m1(src);
+  return a;
+}
+
+vfloat64m2_t test_vreinterpret_v_i64m2_f64m2(vint64m2_t src)
+{
+  vfloat64m2_t a = vreinterpret_v_i64m2_f64m2(src);
+  return a;
+}
+
+vfloat64m4_t test_vreinterpret_v_i64m4_f64m4(vint64m4_t src)
+{
+  vfloat64m4_t a = vreinterpret_v_i64m4_f64m4(src);
+  return a;
+}
+
+vfloat64m8_t test_vreinterpret_v_i64m8_f64m8(vint64m8_t src)
+{
+  vfloat64m8_t a = vreinterpret_v_i64m8_f64m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_u64m1_i64m1(vuint64m1_t src)
+{
+  vint64m1_t a = vreinterpret_v_u64m1_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_u64m2_i64m2(vuint64m2_t src)
+{
+  vint64m2_t a = vreinterpret_v_u64m2_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_u64m4_i64m4(vuint64m4_t src)
+{
+  vint64m4_t a = vreinterpret_v_u64m4_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_u64m8_i64m8(vuint64m8_t src)
+{
+  vint64m8_t a = vreinterpret_v_u64m8_i64m8(src);
+  return a;
+}
+
+vfloat64m1_t test_vreinterpret_v_u64m1_f64m1(vuint64m1_t src)
+{
+  vfloat64m1_t a = vreinterpret_v_u64m1_f64m1(src);
+  return a;
+}
+
+vfloat64m2_t test_vreinterpret_v_u64m2_f64m2(vuint64m2_t src)
+{
+  vfloat64m2_t a = vreinterpret_v_u64m2_f64m2(src);
+  return a;
+}
+
+vfloat64m4_t test_vreinterpret_v_u64m4_f64m4(vuint64m4_t src)
+{
+  vfloat64m4_t a = vreinterpret_v_u64m4_f64m4(src);
+  return a;
+}
+
+vfloat64m8_t test_vreinterpret_v_u64m8_f64m8(vuint64m8_t src)
+{
+  vfloat64m8_t a = vreinterpret_v_u64m8_f64m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_f64m1_i64m1(vfloat64m1_t src)
+{
+  vint64m1_t a = vreinterpret_v_f64m1_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_f64m2_i64m2(vfloat64m2_t src)
+{
+  vint64m2_t a = vreinterpret_v_f64m2_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_f64m4_i64m4(vfloat64m4_t src)
+{
+  vint64m4_t a = vreinterpret_v_f64m4_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_f64m8_i64m8(vfloat64m8_t src)
+{
+  vint64m8_t a = vreinterpret_v_f64m8_i64m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_f64m1_u64m1(vfloat64m1_t src)
+{
+  vuint64m1_t a = vreinterpret_v_f64m1_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_f64m2_u64m2(vfloat64m2_t src)
+{
+  vuint64m2_t a = vreinterpret_v_f64m2_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_f64m4_u64m4(vfloat64m4_t src)
+{
+  vuint64m4_t a = vreinterpret_v_f64m4_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_f64m8_u64m8(vfloat64m8_t src)
+{
+  vuint64m8_t a = vreinterpret_v_f64m8_u64m8(src);
+  return a;
+}
+
+/* Reinterpret between different SEWs under the same LMUL.  */
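+/* Only the element width (SEW) changes; the underlying bits and the
+   register group size (LMUL) are unchanged.  */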
+
+vint16mf4_t test_vreinterpret_v_i8mf4_i16mf4(vint8mf4_t src)
+{
+  vint16mf4_t a = vreinterpret_v_i8mf4_i16mf4(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_i8mf2_i16mf2(vint8mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_v_i8mf2_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i8m1_i16m1(vint8m1_t src)
+{
+  vint16m1_t a = vreinterpret_v_i8m1_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i8m2_i16m2(vint8m2_t src)
+{
+  vint16m2_t a = vreinterpret_v_i8m2_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i8m4_i16m4(vint8m4_t src)
+{
+  vint16m4_t a = vreinterpret_v_i8m4_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i8m8_i16m8(vint8m8_t src)
+{
+  vint16m8_t a = vreinterpret_v_i8m8_i16m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_i8mf2_i32mf2(vint8mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_v_i8mf2_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i8m1_i32m1(vint8m1_t src)
+{
+  vint32m1_t a = vreinterpret_v_i8m1_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i8m2_i32m2(vint8m2_t src)
+{
+  vint32m2_t a = vreinterpret_v_i8m2_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i8m4_i32m4(vint8m4_t src)
+{
+  vint32m4_t a = vreinterpret_v_i8m4_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i8m8_i32m8(vint8m8_t src)
+{
+  vint32m8_t a = vreinterpret_v_i8m8_i32m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i8m1_i64m1(vint8m1_t src)
+{
+  vint64m1_t a = vreinterpret_v_i8m1_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i8m2_i64m2(vint8m2_t src)
+{
+  vint64m2_t a = vreinterpret_v_i8m2_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i8m4_i64m4(vint8m4_t src)
+{
+  vint64m4_t a = vreinterpret_v_i8m4_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i8m8_i64m8(vint8m8_t src)
+{
+  vint64m8_t a = vreinterpret_v_i8m8_i64m8(src);
+  return a;
+}
+
+vint8mf4_t test_vreinterpret_v_i16mf4_i8mf4(vint16mf4_t src)
+{
+  vint8mf4_t a = vreinterpret_v_i16mf4_i8mf4(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_i16mf2_i8mf2(vint16mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_v_i16mf2_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i16m1_i8m1(vint16m1_t src)
+{
+  vint8m1_t a = vreinterpret_v_i16m1_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i16m2_i8m2(vint16m2_t src)
+{
+  vint8m2_t a = vreinterpret_v_i16m2_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i16m4_i8m4(vint16m4_t src)
+{
+  vint8m4_t a = vreinterpret_v_i16m4_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i16m8_i8m8(vint16m8_t src)
+{
+  vint8m8_t a = vreinterpret_v_i16m8_i8m8(src);
+  return a;
+}
+
+vint32mf2_t test_vreinterpret_v_i16mf2_i32mf2(vint16mf2_t src)
+{
+  vint32mf2_t a = vreinterpret_v_i16mf2_i32mf2(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i16m1_i32m1(vint16m1_t src)
+{
+  vint32m1_t a = vreinterpret_v_i16m1_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i16m2_i32m2(vint16m2_t src)
+{
+  vint32m2_t a = vreinterpret_v_i16m2_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i16m4_i32m4(vint16m4_t src)
+{
+  vint32m4_t a = vreinterpret_v_i16m4_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i16m8_i32m8(vint16m8_t src)
+{
+  vint32m8_t a = vreinterpret_v_i16m8_i32m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i16m1_i64m1(vint16m1_t src)
+{
+  vint64m1_t a = vreinterpret_v_i16m1_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i16m2_i64m2(vint16m2_t src)
+{
+  vint64m2_t a = vreinterpret_v_i16m2_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i16m4_i64m4(vint16m4_t src)
+{
+  vint64m4_t a = vreinterpret_v_i16m4_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i16m8_i64m8(vint16m8_t src)
+{
+  vint64m8_t a = vreinterpret_v_i16m8_i64m8(src);
+  return a;
+}
+
+vint8mf2_t test_vreinterpret_v_i32mf2_i8mf2(vint32mf2_t src)
+{
+  vint8mf2_t a = vreinterpret_v_i32mf2_i8mf2(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i32m1_i8m1(vint32m1_t src)
+{
+  vint8m1_t a = vreinterpret_v_i32m1_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i32m2_i8m2(vint32m2_t src)
+{
+  vint8m2_t a = vreinterpret_v_i32m2_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i32m4_i8m4(vint32m4_t src)
+{
+  vint8m4_t a = vreinterpret_v_i32m4_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i32m8_i8m8(vint32m8_t src)
+{
+  vint8m8_t a = vreinterpret_v_i32m8_i8m8(src);
+  return a;
+}
+
+vint16mf2_t test_vreinterpret_v_i32mf2_i16mf2(vint32mf2_t src)
+{
+  vint16mf2_t a = vreinterpret_v_i32mf2_i16mf2(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i32m1_i16m1(vint32m1_t src)
+{
+  vint16m1_t a = vreinterpret_v_i32m1_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i32m2_i16m2(vint32m2_t src)
+{
+  vint16m2_t a = vreinterpret_v_i32m2_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i32m4_i16m4(vint32m4_t src)
+{
+  vint16m4_t a = vreinterpret_v_i32m4_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i32m8_i16m8(vint32m8_t src)
+{
+  vint16m8_t a = vreinterpret_v_i32m8_i16m8(src);
+  return a;
+}
+
+vint64m1_t test_vreinterpret_v_i32m1_i64m1(vint32m1_t src)
+{
+  vint64m1_t a = vreinterpret_v_i32m1_i64m1(src);
+  return a;
+}
+
+vint64m2_t test_vreinterpret_v_i32m2_i64m2(vint32m2_t src)
+{
+  vint64m2_t a = vreinterpret_v_i32m2_i64m2(src);
+  return a;
+}
+
+vint64m4_t test_vreinterpret_v_i32m4_i64m4(vint32m4_t src)
+{
+  vint64m4_t a = vreinterpret_v_i32m4_i64m4(src);
+  return a;
+}
+
+vint64m8_t test_vreinterpret_v_i32m8_i64m8(vint32m8_t src)
+{
+  vint64m8_t a = vreinterpret_v_i32m8_i64m8(src);
+  return a;
+}
+
+vint8m1_t test_vreinterpret_v_i64m1_i8m1(vint64m1_t src)
+{
+  vint8m1_t a = vreinterpret_v_i64m1_i8m1(src);
+  return a;
+}
+
+vint8m2_t test_vreinterpret_v_i64m2_i8m2(vint64m2_t src)
+{
+  vint8m2_t a = vreinterpret_v_i64m2_i8m2(src);
+  return a;
+}
+
+vint8m4_t test_vreinterpret_v_i64m4_i8m4(vint64m4_t src)
+{
+  vint8m4_t a = vreinterpret_v_i64m4_i8m4(src);
+  return a;
+}
+
+vint8m8_t test_vreinterpret_v_i64m8_i8m8(vint64m8_t src)
+{
+  vint8m8_t a = vreinterpret_v_i64m8_i8m8(src);
+  return a;
+}
+
+vint16m1_t test_vreinterpret_v_i64m1_i16m1(vint64m1_t src)
+{
+  vint16m1_t a = vreinterpret_v_i64m1_i16m1(src);
+  return a;
+}
+
+vint16m2_t test_vreinterpret_v_i64m2_i16m2(vint64m2_t src)
+{
+  vint16m2_t a = vreinterpret_v_i64m2_i16m2(src);
+  return a;
+}
+
+vint16m4_t test_vreinterpret_v_i64m4_i16m4(vint64m4_t src)
+{
+  vint16m4_t a = vreinterpret_v_i64m4_i16m4(src);
+  return a;
+}
+
+vint16m8_t test_vreinterpret_v_i64m8_i16m8(vint64m8_t src)
+{
+  vint16m8_t a = vreinterpret_v_i64m8_i16m8(src);
+  return a;
+}
+
+vint32m1_t test_vreinterpret_v_i64m1_i32m1(vint64m1_t src)
+{
+  vint32m1_t a = vreinterpret_v_i64m1_i32m1(src);
+  return a;
+}
+
+vint32m2_t test_vreinterpret_v_i64m2_i32m2(vint64m2_t src)
+{
+  vint32m2_t a = vreinterpret_v_i64m2_i32m2(src);
+  return a;
+}
+
+vint32m4_t test_vreinterpret_v_i64m4_i32m4(vint64m4_t src)
+{
+  vint32m4_t a = vreinterpret_v_i64m4_i32m4(src);
+  return a;
+}
+
+vint32m8_t test_vreinterpret_v_i64m8_i32m8(vint64m8_t src)
+{
+  vint32m8_t a = vreinterpret_v_i64m8_i32m8(src);
+  return a;
+}
+
+vuint16mf4_t test_vreinterpret_v_u8mf4_u16mf4(vuint8mf4_t src)
+{
+  vuint16mf4_t a = vreinterpret_v_u8mf4_u16mf4(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_u8mf2_u16mf2(vuint8mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_v_u8mf2_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u8m1_u16m1(vuint8m1_t src)
+{
+  vuint16m1_t a = vreinterpret_v_u8m1_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u8m2_u16m2(vuint8m2_t src)
+{
+  vuint16m2_t a = vreinterpret_v_u8m2_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u8m4_u16m4(vuint8m4_t src)
+{
+  vuint16m4_t a = vreinterpret_v_u8m4_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u8m8_u16m8(vuint8m8_t src)
+{
+  vuint16m8_t a = vreinterpret_v_u8m8_u16m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_u8mf2_u32mf2(vuint8mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_v_u8mf2_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u8m1_u32m1(vuint8m1_t src)
+{
+  vuint32m1_t a = vreinterpret_v_u8m1_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u8m2_u32m2(vuint8m2_t src)
+{
+  vuint32m2_t a = vreinterpret_v_u8m2_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u8m4_u32m4(vuint8m4_t src)
+{
+  vuint32m4_t a = vreinterpret_v_u8m4_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u8m8_u32m8(vuint8m8_t src)
+{
+  vuint32m8_t a = vreinterpret_v_u8m8_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u8m1_u64m1(vuint8m1_t src)
+{
+  vuint64m1_t a = vreinterpret_v_u8m1_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u8m2_u64m2(vuint8m2_t src)
+{
+  vuint64m2_t a = vreinterpret_v_u8m2_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u8m4_u64m4(vuint8m4_t src)
+{
+  vuint64m4_t a = vreinterpret_v_u8m4_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u8m8_u64m8(vuint8m8_t src)
+{
+  vuint64m8_t a = vreinterpret_v_u8m8_u64m8(src);
+  return a;
+}
+
+vuint8mf4_t test_vreinterpret_v_u16mf4_u8mf4(vuint16mf4_t src)
+{
+  vuint8mf4_t a = vreinterpret_v_u16mf4_u8mf4(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_u16mf2_u8mf2(vuint16mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_v_u16mf2_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u16m1_u8m1(vuint16m1_t src)
+{
+  vuint8m1_t a = vreinterpret_v_u16m1_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u16m2_u8m2(vuint16m2_t src)
+{
+  vuint8m2_t a = vreinterpret_v_u16m2_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u16m4_u8m4(vuint16m4_t src)
+{
+  vuint8m4_t a = vreinterpret_v_u16m4_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u16m8_u8m8(vuint16m8_t src)
+{
+  vuint8m8_t a = vreinterpret_v_u16m8_u8m8(src);
+  return a;
+}
+
+vuint32mf2_t test_vreinterpret_v_u16mf2_u32mf2(vuint16mf2_t src)
+{
+  vuint32mf2_t a = vreinterpret_v_u16mf2_u32mf2(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u16m1_u32m1(vuint16m1_t src)
+{
+  vuint32m1_t a = vreinterpret_v_u16m1_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u16m2_u32m2(vuint16m2_t src)
+{
+  vuint32m2_t a = vreinterpret_v_u16m2_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u16m4_u32m4(vuint16m4_t src)
+{
+  vuint32m4_t a = vreinterpret_v_u16m4_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u16m8_u32m8(vuint16m8_t src)
+{
+  vuint32m8_t a = vreinterpret_v_u16m8_u32m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u16m1_u64m1(vuint16m1_t src)
+{
+  vuint64m1_t a = vreinterpret_v_u16m1_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u16m2_u64m2(vuint16m2_t src)
+{
+  vuint64m2_t a = vreinterpret_v_u16m2_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u16m4_u64m4(vuint16m4_t src)
+{
+  vuint64m4_t a = vreinterpret_v_u16m4_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u16m8_u64m8(vuint16m8_t src)
+{
+  vuint64m8_t a = vreinterpret_v_u16m8_u64m8(src);
+  return a;
+}
+
+vuint8mf2_t test_vreinterpret_v_u32mf2_u8mf2(vuint32mf2_t src)
+{
+  vuint8mf2_t a = vreinterpret_v_u32mf2_u8mf2(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u32m1_u8m1(vuint32m1_t src)
+{
+  vuint8m1_t a = vreinterpret_v_u32m1_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u32m2_u8m2(vuint32m2_t src)
+{
+  vuint8m2_t a = vreinterpret_v_u32m2_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u32m4_u8m4(vuint32m4_t src)
+{
+  vuint8m4_t a = vreinterpret_v_u32m4_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u32m8_u8m8(vuint32m8_t src)
+{
+  vuint8m8_t a = vreinterpret_v_u32m8_u8m8(src);
+  return a;
+}
+
+vuint16mf2_t test_vreinterpret_v_u32mf2_u16mf2(vuint32mf2_t src)
+{
+  vuint16mf2_t a = vreinterpret_v_u32mf2_u16mf2(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u32m1_u16m1(vuint32m1_t src)
+{
+  vuint16m1_t a = vreinterpret_v_u32m1_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u32m2_u16m2(vuint32m2_t src)
+{
+  vuint16m2_t a = vreinterpret_v_u32m2_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u32m4_u16m4(vuint32m4_t src)
+{
+  vuint16m4_t a = vreinterpret_v_u32m4_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u32m8_u16m8(vuint32m8_t src)
+{
+  vuint16m8_t a = vreinterpret_v_u32m8_u16m8(src);
+  return a;
+}
+
+vuint64m1_t test_vreinterpret_v_u32m1_u64m1(vuint32m1_t src)
+{
+  vuint64m1_t a = vreinterpret_v_u32m1_u64m1(src);
+  return a;
+}
+
+vuint64m2_t test_vreinterpret_v_u32m2_u64m2(vuint32m2_t src)
+{
+  vuint64m2_t a = vreinterpret_v_u32m2_u64m2(src);
+  return a;
+}
+
+vuint64m4_t test_vreinterpret_v_u32m4_u64m4(vuint32m4_t src)
+{
+  vuint64m4_t a = vreinterpret_v_u32m4_u64m4(src);
+  return a;
+}
+
+vuint64m8_t test_vreinterpret_v_u32m8_u64m8(vuint32m8_t src)
+{
+  vuint64m8_t a = vreinterpret_v_u32m8_u64m8(src);
+  return a;
+}
+
+vuint8m1_t test_vreinterpret_v_u64m1_u8m1(vuint64m1_t src)
+{
+  vuint8m1_t a = vreinterpret_v_u64m1_u8m1(src);
+  return a;
+}
+
+vuint8m2_t test_vreinterpret_v_u64m2_u8m2(vuint64m2_t src)
+{
+  vuint8m2_t a = vreinterpret_v_u64m2_u8m2(src);
+  return a;
+}
+
+vuint8m4_t test_vreinterpret_v_u64m4_u8m4(vuint64m4_t src)
+{
+  vuint8m4_t a = vreinterpret_v_u64m4_u8m4(src);
+  return a;
+}
+
+vuint8m8_t test_vreinterpret_v_u64m8_u8m8(vuint64m8_t src)
+{
+  vuint8m8_t a = vreinterpret_v_u64m8_u8m8(src);
+  return a;
+}
+
+vuint16m1_t test_vreinterpret_v_u64m1_u16m1(vuint64m1_t src)
+{
+  vuint16m1_t a = vreinterpret_v_u64m1_u16m1(src);
+  return a;
+}
+
+vuint16m2_t test_vreinterpret_v_u64m2_u16m2(vuint64m2_t src)
+{
+  vuint16m2_t a = vreinterpret_v_u64m2_u16m2(src);
+  return a;
+}
+
+vuint16m4_t test_vreinterpret_v_u64m4_u16m4(vuint64m4_t src)
+{
+  vuint16m4_t a = vreinterpret_v_u64m4_u16m4(src);
+  return a;
+}
+
+vuint16m8_t test_vreinterpret_v_u64m8_u16m8(vuint64m8_t src)
+{
+  vuint16m8_t a = vreinterpret_v_u64m8_u16m8(src);
+  return a;
+}
+
+vuint32m1_t test_vreinterpret_v_u64m1_u32m1(vuint64m1_t src)
+{
+  vuint32m1_t a = vreinterpret_v_u64m1_u32m1(src);
+  return a;
+}
+
+vuint32m2_t test_vreinterpret_v_u64m2_u32m2(vuint64m2_t src)
+{
+  vuint32m2_t a = vreinterpret_v_u64m2_u32m2(src);
+  return a;
+}
+
+vuint32m4_t test_vreinterpret_v_u64m4_u32m4(vuint64m4_t src)
+{
+  vuint32m4_t a = vreinterpret_v_u64m4_u32m4(src);
+  return a;
+}
+
+vuint32m8_t test_vreinterpret_v_u64m8_u32m8(vuint64m8_t src)
+{
+  vuint32m8_t a = vreinterpret_v_u64m8_u32m8(src);
+  return a;
+}
+
+vint8mf4_t test_vlmul_ext_v_i8mf8_i8mf4(vint8mf8_t op1)
+{
+  vint8mf4_t a = vlmul_ext_v_i8mf8_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf8_i8mf2(vint8mf8_t op1)
+{
+  vint8mf2_t a = vlmul_ext_v_i8mf8_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf8_i8m1(vint8mf8_t op1)
+{
+  vint8m1_t a = vlmul_ext_v_i8mf8_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf8_i8m2(vint8mf8_t op1)
+{
+  vint8m2_t a = vlmul_ext_v_i8mf8_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf8_i8m4(vint8mf8_t op1)
+{
+  vint8m4_t a = vlmul_ext_v_i8mf8_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf8_i8m8(vint8mf8_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8mf8_i8m8(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_ext_v_i8mf4_i8mf2(vint8mf4_t op1)
+{
+  vint8mf2_t a = vlmul_ext_v_i8mf4_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf4_i8m1(vint8mf4_t op1)
+{
+  vint8m1_t a = vlmul_ext_v_i8mf4_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf4_i8m2(vint8mf4_t op1)
+{
+  vint8m2_t a = vlmul_ext_v_i8mf4_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf4_i8m4(vint8mf4_t op1)
+{
+  vint8m4_t a = vlmul_ext_v_i8mf4_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf4_i8m8(vint8mf4_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8mf4_i8m8(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_ext_v_i8mf2_i8m1(vint8mf2_t op1)
+{
+  vint8m1_t a = vlmul_ext_v_i8mf2_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8mf2_i8m2(vint8mf2_t op1)
+{
+  vint8m2_t a = vlmul_ext_v_i8mf2_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8mf2_i8m4(vint8mf2_t op1)
+{
+  vint8m4_t a = vlmul_ext_v_i8mf2_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8mf2_i8m8(vint8mf2_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8mf2_i8m8(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_ext_v_i8m1_i8m2(vint8m1_t op1)
+{
+  vint8m2_t a = vlmul_ext_v_i8m1_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8m1_i8m4(vint8m1_t op1)
+{
+  vint8m4_t a = vlmul_ext_v_i8m1_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m1_i8m8(vint8m1_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8m1_i8m8(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_ext_v_i8m2_i8m4(vint8m2_t op1)
+{
+  vint8m4_t a = vlmul_ext_v_i8m2_i8m4(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m2_i8m8(vint8m2_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8m2_i8m8(op1);
+  return a;
+}
+
+vint8m8_t test_vlmul_ext_v_i8m4_i8m8(vint8m4_t op1)
+{
+  vint8m8_t a = vlmul_ext_v_i8m4_i8m8(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_ext_v_i16mf4_i16mf2(vint16mf4_t op1)
+{
+  vint16mf2_t a = vlmul_ext_v_i16mf4_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf4_i16m1(vint16mf4_t op1)
+{
+  vint16m1_t a = vlmul_ext_v_i16mf4_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf4_i16m2(vint16mf4_t op1)
+{
+  vint16m2_t a = vlmul_ext_v_i16mf4_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf4_i16m4(vint16mf4_t op1)
+{
+  vint16m4_t a = vlmul_ext_v_i16mf4_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf4_i16m8(vint16mf4_t op1)
+{
+  vint16m8_t a = vlmul_ext_v_i16mf4_i16m8(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_ext_v_i16mf2_i16m1(vint16mf2_t op1)
+{
+  vint16m1_t a = vlmul_ext_v_i16mf2_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16mf2_i16m2(vint16mf2_t op1)
+{
+  vint16m2_t a = vlmul_ext_v_i16mf2_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16mf2_i16m4(vint16mf2_t op1)
+{
+  vint16m4_t a = vlmul_ext_v_i16mf2_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16mf2_i16m8(vint16mf2_t op1)
+{
+  vint16m8_t a = vlmul_ext_v_i16mf2_i16m8(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_ext_v_i16m1_i16m2(vint16m1_t op1)
+{
+  vint16m2_t a = vlmul_ext_v_i16m1_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16m1_i16m4(vint16m1_t op1)
+{
+  vint16m4_t a = vlmul_ext_v_i16m1_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m1_i16m8(vint16m1_t op1)
+{
+  vint16m8_t a = vlmul_ext_v_i16m1_i16m8(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_ext_v_i16m2_i16m4(vint16m2_t op1)
+{
+  vint16m4_t a = vlmul_ext_v_i16m2_i16m4(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m2_i16m8(vint16m2_t op1)
+{
+  vint16m8_t a = vlmul_ext_v_i16m2_i16m8(op1);
+  return a;
+}
+
+vint16m8_t test_vlmul_ext_v_i16m4_i16m8(vint16m4_t op1)
+{
+  vint16m8_t a = vlmul_ext_v_i16m4_i16m8(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_ext_v_i32mf2_i32m1(vint32mf2_t op1)
+{
+  vint32m1_t a = vlmul_ext_v_i32mf2_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_ext_v_i32mf2_i32m2(vint32mf2_t op1)
+{
+  vint32m2_t a = vlmul_ext_v_i32mf2_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32mf2_i32m4(vint32mf2_t op1)
+{
+  vint32m4_t a = vlmul_ext_v_i32mf2_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32mf2_i32m8(vint32mf2_t op1)
+{
+  vint32m8_t a = vlmul_ext_v_i32mf2_i32m8(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_ext_v_i32m1_i32m2(vint32m1_t op1)
+{
+  vint32m2_t a = vlmul_ext_v_i32m1_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32m1_i32m4(vint32m1_t op1)
+{
+  vint32m4_t a = vlmul_ext_v_i32m1_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m1_i32m8(vint32m1_t op1)
+{
+  vint32m8_t a = vlmul_ext_v_i32m1_i32m8(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_ext_v_i32m2_i32m4(vint32m2_t op1)
+{
+  vint32m4_t a = vlmul_ext_v_i32m2_i32m4(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m2_i32m8(vint32m2_t op1)
+{
+  vint32m8_t a = vlmul_ext_v_i32m2_i32m8(op1);
+  return a;
+}
+
+vint32m8_t test_vlmul_ext_v_i32m4_i32m8(vint32m4_t op1)
+{
+  vint32m8_t a = vlmul_ext_v_i32m4_i32m8(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_ext_v_i64m1_i64m2(vint64m1_t op1)
+{
+  vint64m2_t a = vlmul_ext_v_i64m1_i64m2(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_ext_v_i64m1_i64m4(vint64m1_t op1)
+{
+  vint64m4_t a = vlmul_ext_v_i64m1_i64m4(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m1_i64m8(vint64m1_t op1)
+{
+  vint64m8_t a = vlmul_ext_v_i64m1_i64m8(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_ext_v_i64m2_i64m4(vint64m2_t op1)
+{
+  vint64m4_t a = vlmul_ext_v_i64m2_i64m4(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m2_i64m8(vint64m2_t op1)
+{
+  vint64m8_t a = vlmul_ext_v_i64m2_i64m8(op1);
+  return a;
+}
+
+vint64m8_t test_vlmul_ext_v_i64m4_i64m8(vint64m4_t op1)
+{
+  vint64m8_t a = vlmul_ext_v_i64m4_i64m8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_ext_v_u8mf8_u8mf4(vuint8mf8_t op1)
+{
+  vuint8mf4_t a = vlmul_ext_v_u8mf8_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf8_u8mf2(vuint8mf8_t op1)
+{
+  vuint8mf2_t a = vlmul_ext_v_u8mf8_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf8_u8m1(vuint8mf8_t op1)
+{
+  vuint8m1_t a = vlmul_ext_v_u8mf8_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf8_u8m2(vuint8mf8_t op1)
+{
+  vuint8m2_t a = vlmul_ext_v_u8mf8_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf8_u8m4(vuint8mf8_t op1)
+{
+  vuint8m4_t a = vlmul_ext_v_u8mf8_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf8_u8m8(vuint8mf8_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8mf8_u8m8(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_ext_v_u8mf4_u8mf2(vuint8mf4_t op1)
+{
+  vuint8mf2_t a = vlmul_ext_v_u8mf4_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf4_u8m1(vuint8mf4_t op1)
+{
+  vuint8m1_t a = vlmul_ext_v_u8mf4_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf4_u8m2(vuint8mf4_t op1)
+{
+  vuint8m2_t a = vlmul_ext_v_u8mf4_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf4_u8m4(vuint8mf4_t op1)
+{
+  vuint8m4_t a = vlmul_ext_v_u8mf4_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf4_u8m8(vuint8mf4_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8mf4_u8m8(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_ext_v_u8mf2_u8m1(vuint8mf2_t op1)
+{
+  vuint8m1_t a = vlmul_ext_v_u8mf2_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8mf2_u8m2(vuint8mf2_t op1)
+{
+  vuint8m2_t a = vlmul_ext_v_u8mf2_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8mf2_u8m4(vuint8mf2_t op1)
+{
+  vuint8m4_t a = vlmul_ext_v_u8mf2_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8mf2_u8m8(vuint8mf2_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8mf2_u8m8(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_ext_v_u8m1_u8m2(vuint8m1_t op1)
+{
+  vuint8m2_t a = vlmul_ext_v_u8m1_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m1_u8m4(vuint8m1_t op1)
+{
+  vuint8m4_t a = vlmul_ext_v_u8m1_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m1_u8m8(vuint8m1_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8m1_u8m8(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_ext_v_u8m2_u8m4(vuint8m2_t op1)
+{
+  vuint8m4_t a = vlmul_ext_v_u8m2_u8m4(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m2_u8m8(vuint8m2_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8m2_u8m8(op1);
+  return a;
+}
+
+vuint8m8_t test_vlmul_ext_v_u8m4_u8m8(vuint8m4_t op1)
+{
+  vuint8m8_t a = vlmul_ext_v_u8m4_u8m8(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_ext_v_u16mf4_u16mf2(vuint16mf4_t op1)
+{
+  vuint16mf2_t a = vlmul_ext_v_u16mf4_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf4_u16m1(vuint16mf4_t op1)
+{
+  vuint16m1_t a = vlmul_ext_v_u16mf4_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf4_u16m2(vuint16mf4_t op1)
+{
+  vuint16m2_t a = vlmul_ext_v_u16mf4_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf4_u16m4(vuint16mf4_t op1)
+{
+  vuint16m4_t a = vlmul_ext_v_u16mf4_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf4_u16m8(vuint16mf4_t op1)
+{
+  vuint16m8_t a = vlmul_ext_v_u16mf4_u16m8(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_ext_v_u16mf2_u16m1(vuint16mf2_t op1)
+{
+  vuint16m1_t a = vlmul_ext_v_u16mf2_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16mf2_u16m2(vuint16mf2_t op1)
+{
+  vuint16m2_t a = vlmul_ext_v_u16mf2_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16mf2_u16m4(vuint16mf2_t op1)
+{
+  vuint16m4_t a = vlmul_ext_v_u16mf2_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16mf2_u16m8(vuint16mf2_t op1)
+{
+  vuint16m8_t a = vlmul_ext_v_u16mf2_u16m8(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_ext_v_u16m1_u16m2(vuint16m1_t op1)
+{
+  vuint16m2_t a = vlmul_ext_v_u16m1_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m1_u16m4(vuint16m1_t op1)
+{
+  vuint16m4_t a = vlmul_ext_v_u16m1_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m1_u16m8(vuint16m1_t op1)
+{
+  vuint16m8_t a = vlmul_ext_v_u16m1_u16m8(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_ext_v_u16m2_u16m4(vuint16m2_t op1)
+{
+  vuint16m4_t a = vlmul_ext_v_u16m2_u16m4(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m2_u16m8(vuint16m2_t op1)
+{
+  vuint16m8_t a = vlmul_ext_v_u16m2_u16m8(op1);
+  return a;
+}
+
+vuint16m8_t test_vlmul_ext_v_u16m4_u16m8(vuint16m4_t op1)
+{
+  vuint16m8_t a = vlmul_ext_v_u16m4_u16m8(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_ext_v_u32mf2_u32m1(vuint32mf2_t op1)
+{
+  vuint32m1_t a = vlmul_ext_v_u32mf2_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_ext_v_u32mf2_u32m2(vuint32mf2_t op1)
+{
+  vuint32m2_t a = vlmul_ext_v_u32mf2_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32mf2_u32m4(vuint32mf2_t op1)
+{
+  vuint32m4_t a = vlmul_ext_v_u32mf2_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32mf2_u32m8(vuint32mf2_t op1)
+{
+  vuint32m8_t a = vlmul_ext_v_u32mf2_u32m8(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_ext_v_u32m1_u32m2(vuint32m1_t op1)
+{
+  vuint32m2_t a = vlmul_ext_v_u32m1_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m1_u32m4(vuint32m1_t op1)
+{
+  vuint32m4_t a = vlmul_ext_v_u32m1_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m1_u32m8(vuint32m1_t op1)
+{
+  vuint32m8_t a = vlmul_ext_v_u32m1_u32m8(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_ext_v_u32m2_u32m4(vuint32m2_t op1)
+{
+  vuint32m4_t a = vlmul_ext_v_u32m2_u32m4(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m2_u32m8(vuint32m2_t op1)
+{
+  vuint32m8_t a = vlmul_ext_v_u32m2_u32m8(op1);
+  return a;
+}
+
+vuint32m8_t test_vlmul_ext_v_u32m4_u32m8(vuint32m4_t op1)
+{
+  vuint32m8_t a = vlmul_ext_v_u32m4_u32m8(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_ext_v_u64m1_u64m2(vuint64m1_t op1)
+{
+  vuint64m2_t a = vlmul_ext_v_u64m1_u64m2(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m1_u64m4(vuint64m1_t op1)
+{
+  vuint64m4_t a = vlmul_ext_v_u64m1_u64m4(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m1_u64m8(vuint64m1_t op1)
+{
+  vuint64m8_t a = vlmul_ext_v_u64m1_u64m8(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_ext_v_u64m2_u64m4(vuint64m2_t op1)
+{
+  vuint64m4_t a = vlmul_ext_v_u64m2_u64m4(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m2_u64m8(vuint64m2_t op1)
+{
+  vuint64m8_t a = vlmul_ext_v_u64m2_u64m8(op1);
+  return a;
+}
+
+vuint64m8_t test_vlmul_ext_v_u64m4_u64m8(vuint64m4_t op1)
+{
+  vuint64m8_t a = vlmul_ext_v_u64m4_u64m8(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_ext_v_f32mf2_f32m1(vfloat32mf2_t op1)
+{
+  vfloat32m1_t a = vlmul_ext_v_f32mf2_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32mf2_f32m2(vfloat32mf2_t op1)
+{
+  vfloat32m2_t a = vlmul_ext_v_f32mf2_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32mf2_f32m4(vfloat32mf2_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_v_f32mf2_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32mf2_f32m8(vfloat32mf2_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_v_f32mf2_f32m8(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_ext_v_f32m1_f32m2(vfloat32m1_t op1)
+{
+  vfloat32m2_t a = vlmul_ext_v_f32m1_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m1_f32m4(vfloat32m1_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_v_f32m1_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m1_f32m8(vfloat32m1_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_v_f32m1_f32m8(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_ext_v_f32m2_f32m4(vfloat32m2_t op1)
+{
+  vfloat32m4_t a = vlmul_ext_v_f32m2_f32m4(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m2_f32m8(vfloat32m2_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_v_f32m2_f32m8(op1);
+  return a;
+}
+
+vfloat32m8_t test_vlmul_ext_v_f32m4_f32m8(vfloat32m4_t op1)
+{
+  vfloat32m8_t a = vlmul_ext_v_f32m4_f32m8(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_ext_v_f64m1_f64m2(vfloat64m1_t op1)
+{
+  vfloat64m2_t a = vlmul_ext_v_f64m1_f64m2(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m1_f64m4(vfloat64m1_t op1)
+{
+  vfloat64m4_t a = vlmul_ext_v_f64m1_f64m4(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m1_f64m8(vfloat64m1_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_v_f64m1_f64m8(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_ext_v_f64m2_f64m4(vfloat64m2_t op1)
+{
+  vfloat64m4_t a = vlmul_ext_v_f64m2_f64m4(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m2_f64m8(vfloat64m2_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_v_f64m2_f64m8(op1);
+  return a;
+}
+
+vfloat64m8_t test_vlmul_ext_v_f64m4_f64m8(vfloat64m4_t op1)
+{
+  vfloat64m8_t a = vlmul_ext_v_f64m4_f64m8(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8mf4_i8mf8(vint8mf4_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8mf4_i8mf8(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8mf2_i8mf8(vint8mf2_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8mf2_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8mf2_i8mf4(vint8mf2_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_v_i8mf2_i8mf4(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m1_i8mf8(vint8m1_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8m1_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m1_i8mf4(vint8m1_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_v_i8m1_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m1_i8mf2(vint8m1_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_v_i8m1_i8mf2(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m2_i8mf8(vint8m2_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8m2_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m2_i8mf4(vint8m2_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_v_i8m2_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m2_i8mf2(vint8m2_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_v_i8m2_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m2_i8m1(vint8m2_t op1)
+{
+  vint8m1_t a = vlmul_trunc_v_i8m2_i8m1(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m4_i8mf8(vint8m4_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8m4_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m4_i8mf4(vint8m4_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_v_i8m4_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m4_i8mf2(vint8m4_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_v_i8m4_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m4_i8m1(vint8m4_t op1)
+{
+  vint8m1_t a = vlmul_trunc_v_i8m4_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_trunc_v_i8m4_i8m2(vint8m4_t op1)
+{
+  vint8m2_t a = vlmul_trunc_v_i8m4_i8m2(op1);
+  return a;
+}
+
+vint8mf8_t test_vlmul_trunc_v_i8m8_i8mf8(vint8m8_t op1)
+{
+  vint8mf8_t a = vlmul_trunc_v_i8m8_i8mf8(op1);
+  return a;
+}
+
+vint8mf4_t test_vlmul_trunc_v_i8m8_i8mf4(vint8m8_t op1)
+{
+  vint8mf4_t a = vlmul_trunc_v_i8m8_i8mf4(op1);
+  return a;
+}
+
+vint8mf2_t test_vlmul_trunc_v_i8m8_i8mf2(vint8m8_t op1)
+{
+  vint8mf2_t a = vlmul_trunc_v_i8m8_i8mf2(op1);
+  return a;
+}
+
+vint8m1_t test_vlmul_trunc_v_i8m8_i8m1(vint8m8_t op1)
+{
+  vint8m1_t a = vlmul_trunc_v_i8m8_i8m1(op1);
+  return a;
+}
+
+vint8m2_t test_vlmul_trunc_v_i8m8_i8m2(vint8m8_t op1)
+{
+  vint8m2_t a = vlmul_trunc_v_i8m8_i8m2(op1);
+  return a;
+}
+
+vint8m4_t test_vlmul_trunc_v_i8m8_i8m4(vint8m8_t op1)
+{
+  vint8m4_t a = vlmul_trunc_v_i8m8_i8m4(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16mf2_i16mf4(vint16mf2_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_v_i16mf2_i16mf4(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m1_i16mf4(vint16m1_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_v_i16m1_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m1_i16mf2(vint16m1_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_v_i16m1_i16mf2(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m2_i16mf4(vint16m2_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_v_i16m2_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m2_i16mf2(vint16m2_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_v_i16m2_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m2_i16m1(vint16m2_t op1)
+{
+  vint16m1_t a = vlmul_trunc_v_i16m2_i16m1(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m4_i16mf4(vint16m4_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_v_i16m4_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m4_i16mf2(vint16m4_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_v_i16m4_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m4_i16m1(vint16m4_t op1)
+{
+  vint16m1_t a = vlmul_trunc_v_i16m4_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_trunc_v_i16m4_i16m2(vint16m4_t op1)
+{
+  vint16m2_t a = vlmul_trunc_v_i16m4_i16m2(op1);
+  return a;
+}
+
+vint16mf4_t test_vlmul_trunc_v_i16m8_i16mf4(vint16m8_t op1)
+{
+  vint16mf4_t a = vlmul_trunc_v_i16m8_i16mf4(op1);
+  return a;
+}
+
+vint16mf2_t test_vlmul_trunc_v_i16m8_i16mf2(vint16m8_t op1)
+{
+  vint16mf2_t a = vlmul_trunc_v_i16m8_i16mf2(op1);
+  return a;
+}
+
+vint16m1_t test_vlmul_trunc_v_i16m8_i16m1(vint16m8_t op1)
+{
+  vint16m1_t a = vlmul_trunc_v_i16m8_i16m1(op1);
+  return a;
+}
+
+vint16m2_t test_vlmul_trunc_v_i16m8_i16m2(vint16m8_t op1)
+{
+  vint16m2_t a = vlmul_trunc_v_i16m8_i16m2(op1);
+  return a;
+}
+
+vint16m4_t test_vlmul_trunc_v_i16m8_i16m4(vint16m8_t op1)
+{
+  vint16m4_t a = vlmul_trunc_v_i16m8_i16m4(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m1_i32mf2(vint32m1_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_v_i32m1_i32mf2(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m2_i32mf2(vint32m2_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_v_i32m2_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m2_i32m1(vint32m2_t op1)
+{
+  vint32m1_t a = vlmul_trunc_v_i32m2_i32m1(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m4_i32mf2(vint32m4_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_v_i32m4_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m4_i32m1(vint32m4_t op1)
+{
+  vint32m1_t a = vlmul_trunc_v_i32m4_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_trunc_v_i32m4_i32m2(vint32m4_t op1)
+{
+  vint32m2_t a = vlmul_trunc_v_i32m4_i32m2(op1);
+  return a;
+}
+
+vint32mf2_t test_vlmul_trunc_v_i32m8_i32mf2(vint32m8_t op1)
+{
+  vint32mf2_t a = vlmul_trunc_v_i32m8_i32mf2(op1);
+  return a;
+}
+
+vint32m1_t test_vlmul_trunc_v_i32m8_i32m1(vint32m8_t op1)
+{
+  vint32m1_t a = vlmul_trunc_v_i32m8_i32m1(op1);
+  return a;
+}
+
+vint32m2_t test_vlmul_trunc_v_i32m8_i32m2(vint32m8_t op1)
+{
+  vint32m2_t a = vlmul_trunc_v_i32m8_i32m2(op1);
+  return a;
+}
+
+vint32m4_t test_vlmul_trunc_v_i32m8_i32m4(vint32m8_t op1)
+{
+  vint32m4_t a = vlmul_trunc_v_i32m8_i32m4(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m2_i64m1(vint64m2_t op1)
+{
+  vint64m1_t a = vlmul_trunc_v_i64m2_i64m1(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m4_i64m1(vint64m4_t op1)
+{
+  vint64m1_t a = vlmul_trunc_v_i64m4_i64m1(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_trunc_v_i64m4_i64m2(vint64m4_t op1)
+{
+  vint64m2_t a = vlmul_trunc_v_i64m4_i64m2(op1);
+  return a;
+}
+
+vint64m1_t test_vlmul_trunc_v_i64m8_i64m1(vint64m8_t op1)
+{
+  vint64m1_t a = vlmul_trunc_v_i64m8_i64m1(op1);
+  return a;
+}
+
+vint64m2_t test_vlmul_trunc_v_i64m8_i64m2(vint64m8_t op1)
+{
+  vint64m2_t a = vlmul_trunc_v_i64m8_i64m2(op1);
+  return a;
+}
+
+vint64m4_t test_vlmul_trunc_v_i64m8_i64m4(vint64m8_t op1)
+{
+  vint64m4_t a = vlmul_trunc_v_i64m8_i64m4(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8mf4_u8mf8(vuint8mf4_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8mf4_u8mf8(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8mf2_u8mf8(vuint8mf2_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8mf2_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8mf2_u8mf4(vuint8mf2_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_v_u8mf2_u8mf4(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m1_u8mf8(vuint8m1_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8m1_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m1_u8mf4(vuint8m1_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_v_u8m1_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m1_u8mf2(vuint8m1_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_v_u8m1_u8mf2(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m2_u8mf8(vuint8m2_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8m2_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m2_u8mf4(vuint8m2_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_v_u8m2_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m2_u8mf2(vuint8m2_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_v_u8m2_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m2_u8m1(vuint8m2_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_v_u8m2_u8m1(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m4_u8mf8(vuint8m4_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8m4_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m4_u8mf4(vuint8m4_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_v_u8m4_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m4_u8mf2(vuint8m4_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_v_u8m4_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m4_u8m1(vuint8m4_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_v_u8m4_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_trunc_v_u8m4_u8m2(vuint8m4_t op1)
+{
+  vuint8m2_t a = vlmul_trunc_v_u8m4_u8m2(op1);
+  return a;
+}
+
+vuint8mf8_t test_vlmul_trunc_v_u8m8_u8mf8(vuint8m8_t op1)
+{
+  vuint8mf8_t a = vlmul_trunc_v_u8m8_u8mf8(op1);
+  return a;
+}
+
+vuint8mf4_t test_vlmul_trunc_v_u8m8_u8mf4(vuint8m8_t op1)
+{
+  vuint8mf4_t a = vlmul_trunc_v_u8m8_u8mf4(op1);
+  return a;
+}
+
+vuint8mf2_t test_vlmul_trunc_v_u8m8_u8mf2(vuint8m8_t op1)
+{
+  vuint8mf2_t a = vlmul_trunc_v_u8m8_u8mf2(op1);
+  return a;
+}
+
+vuint8m1_t test_vlmul_trunc_v_u8m8_u8m1(vuint8m8_t op1)
+{
+  vuint8m1_t a = vlmul_trunc_v_u8m8_u8m1(op1);
+  return a;
+}
+
+vuint8m2_t test_vlmul_trunc_v_u8m8_u8m2(vuint8m8_t op1)
+{
+  vuint8m2_t a = vlmul_trunc_v_u8m8_u8m2(op1);
+  return a;
+}
+
+vuint8m4_t test_vlmul_trunc_v_u8m8_u8m4(vuint8m8_t op1)
+{
+  vuint8m4_t a = vlmul_trunc_v_u8m8_u8m4(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16mf2_u16mf4(vuint16mf2_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_v_u16mf2_u16mf4(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m1_u16mf4(vuint16m1_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_v_u16m1_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m1_u16mf2(vuint16m1_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_v_u16m1_u16mf2(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m2_u16mf4(vuint16m2_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_v_u16m2_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m2_u16mf2(vuint16m2_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_v_u16m2_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m2_u16m1(vuint16m2_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_v_u16m2_u16m1(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m4_u16mf4(vuint16m4_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_v_u16m4_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m4_u16mf2(vuint16m4_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_v_u16m4_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m4_u16m1(vuint16m4_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_v_u16m4_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_trunc_v_u16m4_u16m2(vuint16m4_t op1)
+{
+  vuint16m2_t a = vlmul_trunc_v_u16m4_u16m2(op1);
+  return a;
+}
+
+vuint16mf4_t test_vlmul_trunc_v_u16m8_u16mf4(vuint16m8_t op1)
+{
+  vuint16mf4_t a = vlmul_trunc_v_u16m8_u16mf4(op1);
+  return a;
+}
+
+vuint16mf2_t test_vlmul_trunc_v_u16m8_u16mf2(vuint16m8_t op1)
+{
+  vuint16mf2_t a = vlmul_trunc_v_u16m8_u16mf2(op1);
+  return a;
+}
+
+vuint16m1_t test_vlmul_trunc_v_u16m8_u16m1(vuint16m8_t op1)
+{
+  vuint16m1_t a = vlmul_trunc_v_u16m8_u16m1(op1);
+  return a;
+}
+
+vuint16m2_t test_vlmul_trunc_v_u16m8_u16m2(vuint16m8_t op1)
+{
+  vuint16m2_t a = vlmul_trunc_v_u16m8_u16m2(op1);
+  return a;
+}
+
+vuint16m4_t test_vlmul_trunc_v_u16m8_u16m4(vuint16m8_t op1)
+{
+  vuint16m4_t a = vlmul_trunc_v_u16m8_u16m4(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m1_u32mf2(vuint32m1_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_v_u32m1_u32mf2(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m2_u32mf2(vuint32m2_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_v_u32m2_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m2_u32m1(vuint32m2_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_v_u32m2_u32m1(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m4_u32mf2(vuint32m4_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_v_u32m4_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m4_u32m1(vuint32m4_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_v_u32m4_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_trunc_v_u32m4_u32m2(vuint32m4_t op1)
+{
+  vuint32m2_t a = vlmul_trunc_v_u32m4_u32m2(op1);
+  return a;
+}
+
+vuint32mf2_t test_vlmul_trunc_v_u32m8_u32mf2(vuint32m8_t op1)
+{
+  vuint32mf2_t a = vlmul_trunc_v_u32m8_u32mf2(op1);
+  return a;
+}
+
+vuint32m1_t test_vlmul_trunc_v_u32m8_u32m1(vuint32m8_t op1)
+{
+  vuint32m1_t a = vlmul_trunc_v_u32m8_u32m1(op1);
+  return a;
+}
+
+vuint32m2_t test_vlmul_trunc_v_u32m8_u32m2(vuint32m8_t op1)
+{
+  vuint32m2_t a = vlmul_trunc_v_u32m8_u32m2(op1);
+  return a;
+}
+
+vuint32m4_t test_vlmul_trunc_v_u32m8_u32m4(vuint32m8_t op1)
+{
+  vuint32m4_t a = vlmul_trunc_v_u32m8_u32m4(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m2_u64m1(vuint64m2_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_v_u64m2_u64m1(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m4_u64m1(vuint64m4_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_v_u64m4_u64m1(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_trunc_v_u64m4_u64m2(vuint64m4_t op1)
+{
+  vuint64m2_t a = vlmul_trunc_v_u64m4_u64m2(op1);
+  return a;
+}
+
+vuint64m1_t test_vlmul_trunc_v_u64m8_u64m1(vuint64m8_t op1)
+{
+  vuint64m1_t a = vlmul_trunc_v_u64m8_u64m1(op1);
+  return a;
+}
+
+vuint64m2_t test_vlmul_trunc_v_u64m8_u64m2(vuint64m8_t op1)
+{
+  vuint64m2_t a = vlmul_trunc_v_u64m8_u64m2(op1);
+  return a;
+}
+
+vuint64m4_t test_vlmul_trunc_v_u64m8_u64m4(vuint64m8_t op1)
+{
+  vuint64m4_t a = vlmul_trunc_v_u64m8_u64m4(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m1_f32mf2(vfloat32m1_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_v_f32m1_f32mf2(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m2_f32mf2(vfloat32m2_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_v_f32m2_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m2_f32m1(vfloat32m2_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_v_f32m2_f32m1(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m4_f32mf2(vfloat32m4_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_v_f32m4_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m4_f32m1(vfloat32m4_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_v_f32m4_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_trunc_v_f32m4_f32m2(vfloat32m4_t op1)
+{
+  vfloat32m2_t a = vlmul_trunc_v_f32m4_f32m2(op1);
+  return a;
+}
+
+vfloat32mf2_t test_vlmul_trunc_v_f32m8_f32mf2(vfloat32m8_t op1)
+{
+  vfloat32mf2_t a = vlmul_trunc_v_f32m8_f32mf2(op1);
+  return a;
+}
+
+vfloat32m1_t test_vlmul_trunc_v_f32m8_f32m1(vfloat32m8_t op1)
+{
+  vfloat32m1_t a = vlmul_trunc_v_f32m8_f32m1(op1);
+  return a;
+}
+
+vfloat32m2_t test_vlmul_trunc_v_f32m8_f32m2(vfloat32m8_t op1)
+{
+  vfloat32m2_t a = vlmul_trunc_v_f32m8_f32m2(op1);
+  return a;
+}
+
+vfloat32m4_t test_vlmul_trunc_v_f32m8_f32m4(vfloat32m8_t op1)
+{
+  vfloat32m4_t a = vlmul_trunc_v_f32m8_f32m4(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m2_f64m1(vfloat64m2_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_v_f64m2_f64m1(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m4_f64m1(vfloat64m4_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_v_f64m4_f64m1(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_trunc_v_f64m4_f64m2(vfloat64m4_t op1)
+{
+  vfloat64m2_t a = vlmul_trunc_v_f64m4_f64m2(op1);
+  return a;
+}
+
+vfloat64m1_t test_vlmul_trunc_v_f64m8_f64m1(vfloat64m8_t op1)
+{
+  vfloat64m1_t a = vlmul_trunc_v_f64m8_f64m1(op1);
+  return a;
+}
+
+vfloat64m2_t test_vlmul_trunc_v_f64m8_f64m2(vfloat64m8_t op1)
+{
+  vfloat64m2_t a = vlmul_trunc_v_f64m8_f64m2(op1);
+  return a;
+}
+
+vfloat64m4_t test_vlmul_trunc_v_f64m8_f64m4(vfloat64m8_t op1)
+{
+  vfloat64m4_t a = vlmul_trunc_v_f64m8_f64m4(op1);
+  return a;
+}
+
+vint8mf8_t test_vundefined_i8mf8()
+{
+  vint8mf8_t a = vundefined_i8mf8();
+  return a;
+}
+
+vint8mf4_t test_vundefined_i8mf4()
+{
+  vint8mf4_t a = vundefined_i8mf4();
+  return a;
+}
+
+vint8mf2_t test_vundefined_i8mf2()
+{
+  vint8mf2_t a = vundefined_i8mf2();
+  return a;
+}
+
+vint8m1_t test_vundefined_i8m1()
+{
+  vint8m1_t a = vundefined_i8m1();
+  return a;
+}
+
+vint8m2_t test_vundefined_i8m2()
+{
+  vint8m2_t a = vundefined_i8m2();
+  return a;
+}
+
+vint8m4_t test_vundefined_i8m4()
+{
+  vint8m4_t a = vundefined_i8m4();
+  return a;
+}
+
+vint8m8_t test_vundefined_i8m8()
+{
+  vint8m8_t a = vundefined_i8m8();
+  return a;
+}
+
+vint16mf4_t test_vundefined_i16mf4()
+{
+  vint16mf4_t a = vundefined_i16mf4();
+  return a;
+}
+
+vint16mf2_t test_vundefined_i16mf2()
+{
+  vint16mf2_t a = vundefined_i16mf2();
+  return a;
+}
+
+vint16m1_t test_vundefined_i16m1()
+{
+  vint16m1_t a = vundefined_i16m1();
+  return a;
+}
+
+vint16m2_t test_vundefined_i16m2()
+{
+  vint16m2_t a = vundefined_i16m2();
+  return a;
+}
+
+vint16m4_t test_vundefined_i16m4()
+{
+  vint16m4_t a = vundefined_i16m4();
+  return a;
+}
+
+vint16m8_t test_vundefined_i16m8()
+{
+  vint16m8_t a = vundefined_i16m8();
+  return a;
+}
+
+vint32mf2_t test_vundefined_i32mf2()
+{
+  vint32mf2_t a = vundefined_i32mf2();
+  return a;
+}
+
+vint32m1_t test_vundefined_i32m1()
+{
+  vint32m1_t a = vundefined_i32m1();
+  return a;
+}
+
+vint32m2_t test_vundefined_i32m2()
+{
+  vint32m2_t a = vundefined_i32m2();
+  return a;
+}
+
+vint32m4_t test_vundefined_i32m4()
+{
+  vint32m4_t a = vundefined_i32m4();
+  return a;
+}
+
+vint32m8_t test_vundefined_i32m8()
+{
+  vint32m8_t a = vundefined_i32m8();
+  return a;
+}
+
+vint64m1_t test_vundefined_i64m1()
+{
+  vint64m1_t a = vundefined_i64m1();
+  return a;
+}
+
+vint64m2_t test_vundefined_i64m2()
+{
+  vint64m2_t a = vundefined_i64m2();
+  return a;
+}
+
+vint64m4_t test_vundefined_i64m4()
+{
+  vint64m4_t a = vundefined_i64m4();
+  return a;
+}
+
+vint64m8_t test_vundefined_i64m8()
+{
+  vint64m8_t a = vundefined_i64m8();
+  return a;
+}
+
+vuint8mf8_t test_vundefined_u8mf8()
+{
+  vuint8mf8_t a = vundefined_u8mf8();
+  return a;
+}
+
+vuint8mf4_t test_vundefined_u8mf4()
+{
+  vuint8mf4_t a = vundefined_u8mf4();
+  return a;
+}
+
+vuint8mf2_t test_vundefined_u8mf2()
+{
+  vuint8mf2_t a = vundefined_u8mf2();
+  return a;
+}
+
+vuint8m1_t test_vundefined_u8m1()
+{
+  vuint8m1_t a = vundefined_u8m1();
+  return a;
+}
+
+vuint8m2_t test_vundefined_u8m2()
+{
+  vuint8m2_t a = vundefined_u8m2();
+  return a;
+}
+
+vuint8m4_t test_vundefined_u8m4()
+{
+  vuint8m4_t a = vundefined_u8m4();
+  return a;
+}
+
+vuint8m8_t test_vundefined_u8m8()
+{
+  vuint8m8_t a = vundefined_u8m8();
+  return a;
+}
+
+vuint16mf4_t test_vundefined_u16mf4()
+{
+  vuint16mf4_t a = vundefined_u16mf4();
+  return a;
+}
+
+vuint16mf2_t test_vundefined_u16mf2()
+{
+  vuint16mf2_t a = vundefined_u16mf2();
+  return a;
+}
+
+vuint16m1_t test_vundefined_u16m1()
+{
+  vuint16m1_t a = vundefined_u16m1();
+  return a;
+}
+
+vuint16m2_t test_vundefined_u16m2()
+{
+  vuint16m2_t a = vundefined_u16m2();
+  return a;
+}
+
+vuint16m4_t test_vundefined_u16m4()
+{
+  vuint16m4_t a = vundefined_u16m4();
+  return a;
+}
+
+vuint16m8_t test_vundefined_u16m8()
+{
+  vuint16m8_t a = vundefined_u16m8();
+  return a;
+}
+
+vuint32mf2_t test_vundefined_u32mf2()
+{
+  vuint32mf2_t a = vundefined_u32mf2();
+  return a;
+}
+
+vuint32m1_t test_vundefined_u32m1()
+{
+  vuint32m1_t a = vundefined_u32m1();
+  return a;
+}
+
+vuint32m2_t test_vundefined_u32m2()
+{
+  vuint32m2_t a = vundefined_u32m2();
+  return a;
+}
+
+vuint32m4_t test_vundefined_u32m4()
+{
+  vuint32m4_t a = vundefined_u32m4();
+  return a;
+}
+
+vuint32m8_t test_vundefined_u32m8()
+{
+  vuint32m8_t a = vundefined_u32m8();
+  return a;
+}
+
+vuint64m1_t test_vundefined_u64m1()
+{
+  vuint64m1_t a = vundefined_u64m1();
+  return a;
+}
+
+vuint64m2_t test_vundefined_u64m2()
+{
+  vuint64m2_t a = vundefined_u64m2();
+  return a;
+}
+
+vuint64m4_t test_vundefined_u64m4()
+{
+  vuint64m4_t a = vundefined_u64m4();
+  return a;
+}
+
+vuint64m8_t test_vundefined_u64m8()
+{
+  vuint64m8_t a = vundefined_u64m8();
+  return a;
+}
+
+vfloat32mf2_t test_vundefined_f32mf2()
+{
+  vfloat32mf2_t a = vundefined_f32mf2();
+  return a;
+}
+
+vfloat32m1_t test_vundefined_f32m1()
+{
+  vfloat32m1_t a = vundefined_f32m1();
+  return a;
+}
+
+vfloat32m2_t test_vundefined_f32m2()
+{
+  vfloat32m2_t a = vundefined_f32m2();
+  return a;
+}
+
+vfloat32m4_t test_vundefined_f32m4()
+{
+  vfloat32m4_t a = vundefined_f32m4();
+  return a;
+}
+
+vfloat32m8_t test_vundefined_f32m8()
+{
+  vfloat32m8_t a = vundefined_f32m8();
+  return a;
+}
+
+vfloat64m1_t test_vundefined_f64m1()
+{
+  vfloat64m1_t a = vundefined_f64m1();
+  return a;
+}
+
+vfloat64m2_t test_vundefined_f64m2()
+{
+  vfloat64m2_t a = vundefined_f64m2();
+  return a;
+}
+
+vfloat64m4_t test_vundefined_f64m4()
+{
+  vfloat64m4_t a = vundefined_f64m4();
+  return a;
+}
+
+vfloat64m8_t test_vundefined_f64m8()
+{
+  vfloat64m8_t a = vundefined_f64m8();
+  return a;
+}
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 11/21]   Add calling function support
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (8 preceding siblings ...)
  2022-05-31  8:50 ` [PATCH 09/21] Add misc function intrinsic support juzhe.zhong
@ 2022-05-31  8:50 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 12/21] Add set get intrinsic support juzhe.zhong
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:50 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv.cc (struct riscv_arg_info): Add calling convention support.
        (riscv_get_arg_info): Add calling convention support.
        (riscv_function_arg_advance): Add calling convention support.
        (riscv_pass_by_reference): Add calling convention support.
        * config/riscv/riscv.h (GCC_RISCV_H): Include <stdbool.h>.
        (V_RETURN): New macro define.
        (MAX_ARGS_IN_VECTOR_REGISTERS): New macro define.
        (MAX_ARGS_IN_MASK_REGISTERS): New macro define.
        (V_ARG_FIRST): New macro define.
        (V_ARG_LAST): New macro define.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/custom/calling_convention_1.c: New test.
        * gcc.target/riscv/rvv/custom/rvv-custom.exp: New test.
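
A small example of the calling convention this patch implements (an
illustrative sketch only: the register assignments follow the
V_ARG_FIRST = v8 and mask-in-v0 rules added below, and are not
verified codegen):

vint8m1_t
foo (vbool8_t mask, vint8m1_t op1, vint8m1_t op2, size_t vl)
{
  /* mask is expected in v0, op1 in v8, op2 in v9 and vl in a0.  */
  return vadd_vv_i8m1_m (mask, op1, op1, op2, vl);
}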
        
---
 gcc/config/riscv/riscv.cc                     | 90 +++++++++++++++++++
 gcc/config/riscv/riscv.h                      | 14 +++
 .../riscv/rvv/custom/calling_convention_1.c   | 46 ++++++++++
 .../riscv/rvv/custom/rvv-custom.exp           | 47 ++++++++++
 4 files changed, 197 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/custom/calling_convention_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/custom/rvv-custom.exp

diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index e88057e992a..832c1754002 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -181,6 +181,18 @@ struct riscv_arg_info {
 
   /* The offset of the first register used, provided num_fprs is nonzero.  */
   unsigned int fpr_offset;
+
+  /* The number of vector registers allocated to this argument.  */
+  unsigned int num_vrs;
+
+  /* The offset of the first register used, provided num_vrs is nonzero.  */
+  unsigned int vr_offset;
+
+  /* The number of mask registers allocated to this argument.  */
+  unsigned int num_mrs;
+
+  /* The offset of the first register used, provided num_mrs is nonzero.  */
+  unsigned int mr_offset;
 };
 
 /* Information about an address described by riscv_address_type.
@@ -3225,11 +3237,13 @@ riscv_get_arg_info (struct riscv_arg_info *info, const CUMULATIVE_ARGS *cum,
   unsigned num_bytes, num_words;
   unsigned fpr_base = return_p ? FP_RETURN : FP_ARG_FIRST;
   unsigned gpr_base = return_p ? GP_RETURN : GP_ARG_FIRST;
+  unsigned vr_base = return_p ? V_RETURN : V_ARG_FIRST;
   unsigned alignment = riscv_function_arg_boundary (mode, type);
 
   memset (info, 0, sizeof (*info));
   info->gpr_offset = cum->num_gprs;
   info->fpr_offset = cum->num_fprs;
+  info->mr_offset = cum->num_mrs;
 
   if (named)
     {
@@ -3292,6 +3306,67 @@ riscv_get_arg_info (struct riscv_arg_info *info, const CUMULATIVE_ARGS *cum,
 				      gregno, TYPE_MODE (fields[1].type),
 				      fields[1].offset);
 	}
+      /* Pass vectors in VRs.  For an argument that contains scalable
+         vectors, e.g. foo (vint8m1_t a), pass the value in VRs to avoid
+         redundant register spills.  At most MAX_ARGS_IN_VECTOR_REGISTERS
+         vector registers are used to pass arguments.  */
+      if (rvv_mode_p (mode))
+	{
+	  /* For a vector return value, use V_RETURN by default.  */
+	  if (return_p)
+	    {
+	      if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+	        return gen_rtx_REG (mode, V_REG_FIRST);
+	      else
+	        return gen_rtx_REG (mode, vr_base);
+	    }
+	  /* The first mask argument is passed in v0; any further mask
+	     arguments are passed in v8, v9, ... like vector arguments.  */
+	  if (GET_MODE_CLASS (mode) == MODE_VECTOR_BOOL)
+	    {
+	      info->num_mrs = 1;
+
+	      if (info->mr_offset + info->num_mrs <= MAX_ARGS_IN_MASK_REGISTERS)
+	        return gen_rtx_REG (mode, V_REG_FIRST);
+	    }
+	  /* The number of vector registers needed to pass this argument.
+	     When the mode size is less than a full vector, a single
+	     register is used.  */
+	  int nvecs;
+	  nvecs = known_le (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR) ? 1 :
+	      exact_div (GET_MODE_SIZE (mode), BYTES_PER_RISCV_VECTOR).to_constant ();
+	  int align = rvv_regsize (mode);
+	  for (int i = 0; i + nvecs <= MAX_ARGS_IN_VECTOR_REGISTERS; i += 1)
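+	  /* First-fit search: find NVECS consecutive unused argument
+	     registers whose first register (v8 + i) is aligned to the
+	     register-group size of the mode; if none are free, fall
+	     through and do not pass the argument in VRs.  */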
+	    {
+	      if (!cum->used_vrs[i] && (i + 8) % align == 0)
+	        {
+	          bool find_space = true;
+	          int j = 1;
+	          for (; j < nvecs; j += 1)
+	            {
+	              if (cum->used_vrs[i + j])
+	                {
+	                  find_space = false;
+	                  break;
+	                }
+	            }
+	          if (find_space)
+	            {
+	              info->num_vrs = nvecs;
+	              info->vr_offset = i;
+	              return gen_rtx_REG (mode, vr_base + i);
+	            }
+	          else
+	            {
+	              /* Skip the J registers that cannot be used.  */
+	              i += j;
+	            }
+	        }
+	    }
+	  info->num_vrs = 0;
+	  info->num_mrs = 0;
+	  return NULL_RTX;
+	}
     }
 
   /* Work out the size of the argument.  */
@@ -3344,6 +3419,15 @@ riscv_function_arg_advance (cumulative_args_t cum_v,
      argument on the stack.  */
   cum->num_fprs = info.fpr_offset + info.num_fprs;
   cum->num_gprs = info.gpr_offset + info.num_gprs;
+  if (info.num_vrs > 0)
+    {
+      for (unsigned int i = 0; i < info.num_vrs; i += 1)
+        {
+          /* Mark the vector registers taken by this argument as used.  */
+          cum->used_vrs[info.vr_offset + i] = true;
+        }
+    }
+  cum->num_mrs = info.mr_offset + info.num_mrs;
 }
 
 /* Implement TARGET_ARG_PARTIAL_BYTES.  */
@@ -3401,6 +3485,12 @@ riscv_pass_by_reference (cumulative_args_t cum_v, const function_arg_info &arg)
       /* Don't pass by reference if we can use a floating-point register.  */
       riscv_get_arg_info (&info, cum, arg.mode, arg.type, arg.named, false);
       if (info.num_fprs)
+	return false;
+      /* Don't pass by reference if we can use an RVV vector register.  */
+      if (info.num_vrs)
+	return false;
+      /* Don't pass by reference if we can use an RVV mask register.  */
+      if (info.num_mrs)
 	return false;
     }
 
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index c54c984e70b..5de745bc824 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -22,6 +22,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_RISCV_H
 #define GCC_RISCV_H
 
+#include <stdbool.h>
 #include "config/riscv/riscv-opts.h"
 
 /* Target CPU builtins.  */
@@ -620,8 +621,13 @@ enum reg_class
 
 #define GP_RETURN GP_ARG_FIRST
 #define FP_RETURN (UNITS_PER_FP_ARG == 0 ? GP_RETURN : FP_ARG_FIRST)
+#define V_RETURN V_ARG_FIRST
 
 #define MAX_ARGS_IN_REGISTERS (riscv_abi == ABI_ILP32E ? 6 : 8)
+/* Follow LLVM's calling convention: at most 16 vector registers and
+   1 mask register are used to pass function arguments.  */
+#define MAX_ARGS_IN_VECTOR_REGISTERS (16)
+#define MAX_ARGS_IN_MASK_REGISTERS (1)
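+/* With V_ARG_FIRST at v8, vector arguments occupy v8-v23 and the single
+   mask argument is passed in v0.  */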
 
 /* Symbolic macros for the first/last argument registers.  */
 
@@ -630,6 +636,8 @@ enum reg_class
 #define GP_TEMP_FIRST (GP_REG_FIRST + 5)
 #define FP_ARG_FIRST (FP_REG_FIRST + 10)
 #define FP_ARG_LAST  (FP_ARG_FIRST + MAX_ARGS_IN_REGISTERS - 1)
+#define V_ARG_FIRST (V_REG_FIRST + 8)
+#define V_ARG_LAST  (V_ARG_FIRST + MAX_ARGS_IN_VECTOR_REGISTERS - 1)
 
 #define CALLEE_SAVED_REG_NUMBER(REGNO)			\
   ((REGNO) >= 8 && (REGNO) <= 9 ? (REGNO) - 8 :		\
@@ -657,6 +665,12 @@ typedef struct {
 
   /* Number of floating-point registers used so far, likewise.  */
   unsigned int num_fprs;
+
+  /* Which vector argument registers have been used by previous
+     arguments; all entries are initially false.  */
+  bool used_vrs[MAX_ARGS_IN_VECTOR_REGISTERS];
+
+  /* Number of mask registers used so far, up to MAX_ARGS_IN_MASK_REGISTERS. */
+  unsigned int num_mrs;
 } CUMULATIVE_ARGS;
 
 /* Initialize a variable CUM of type CUMULATIVE_ARGS
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/custom/calling_convention_1.c b/gcc/testsuite/gcc.target/riscv/rvv/custom/calling_convention_1.c
new file mode 100644
index 00000000000..5f79da2a9d6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/custom/calling_convention_1.c
@@ -0,0 +1,46 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-O3" } */
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+void
+test (
+  vint8mf2_t a, vint8m4_t c, vint8m4_t d, vint8m2_t e, vint8m2_t b,
+  vint8m1_t f, vint8m1_t f2, vint8m1_t f3, vint8m1_t f4, vint8m2_t g, vint8m2_t h,
+  vbool16_t m1, vbool4_t m2,
+  int8_t *a1, int8_t *b1, int8_t *c1,
+  size_t vl)
+{
+  /* f4 => a0, g => a1, h => a2, m2 => a3, 
+     a1 => a4, b1 => a5, c1 => a6, vl => a7
+
+     m1 => v0
+     a => v8, c => v12, d => v16, e => v10, 
+     b => v20, f => v9, f2 => v22, f3 => v23 */
+
+  vse8_v_i8mf2_m(m1, a1, a, vl);
+
+  vse8_v_i8m2_m(m2, b1, b, vl);
+
+  vse8_v_i8m4(c1, c, vl);
+  vse8_v_i8m4(c1, d, vl);
+
+  vse8_v_i8m2(c1, e, vl);
+
+  vse8_v_i8m1(c1, f, vl);
+  vse8_v_i8m1(c1, f2, vl);
+  vse8_v_i8m1(c1, f3, vl);
+  vse8_v_i8m1(c1, f4, vl);
+
+  vse8_v_i8m2(c1, g, vl);
+  vse8_v_i8m2(c1, h, vl);
+}
+/* { dg-final { scan-assembler-times {vse8.v\s+v8,\s*\(a4\),\s*v0.t} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v20,\s*\(a5\),\s*v0.t} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v12,\s*\(a6\)} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v16,\s*\(a6\)} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v10,\s*\(a6\)} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v9,\s*\(a6\)} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v22,\s*\(a6\)} 1 } } */
+/* { dg-final { scan-assembler-times {vse8.v\s+v23,\s*\(a6\)} 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/custom/rvv-custom.exp b/gcc/testsuite/gcc.target/riscv/rvv/custom/rvv-custom.exp
new file mode 100644
index 00000000000..4956ac0b184
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/custom/rvv-custom.exp
@@ -0,0 +1,47 @@
+# Copyright (C) 2022 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS "-pedantic-errors -ansi"
+}
+
+set gcc_march "rv64gcv_zfh"
+if [istarget riscv32-*-*] then {
+  set gcc_march "rv32gcv_zfh"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+set CFLAGS "$DEFAULT_CFLAGS -march=$gcc_march -std=gnu11"
+dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.c]] \
+	"" $CFLAGS
+
+# All done.
+dg-finish
\ No newline at end of file
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 12/21]   Add set get intrinsic support
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (9 preceding siblings ...)
  2022-05-31  8:50 ` [PATCH 11/21] Add calling function support juzhe.zhong
@ 2022-05-31  8:50 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 13/21] Adjust scalable frame and full testcases juzhe.zhong
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:50 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-vector-builtins-functions.cc (vset::assemble_name): New function.
        (vset::get_argument_types): New function.
        (vset::expand): New function.
        (vget::assemble_name): New function.
        (vget::get_argument_types): New function.
        (vget::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def (vset): New macro define.
        (vget): New macro define.
        * config/riscv/riscv-vector-builtins-functions.h (class vset): New class.
        (class vget): New class.

gcc/testsuite/ChangeLog:

        * g++.target/riscv/rvv/set-get.C: New test.
        * gcc.target/riscv/rvv/intrinsic/set-get.c: New test.
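
Example (illustration only, not part of the patch; the argument order
follows the rvv-intrinsic-doc: vget takes (src, index), vset takes
(dest, index, val)).  Each call is expected to lower to a whole-register
move (vmv1r.v/vmv2r.v/vmv4r.v), which is what the scan-assembler counts
in the new testcases verify:

#include <riscv_vector.h>

vint8m4_t
swap_halves (vint8m4_t src)
{
  vint8m2_t lo = vget_v_i8m4_i8m2 (src, 0);  /* First m2 half of SRC.   */
  vint8m2_t hi = vget_v_i8m4_i8m2 (src, 1);  /* Second m2 half of SRC.  */
  src = vset_v_i8m2_i8m4 (src, 0, hi);       /* Each vset/vget should   */
  src = vset_v_i8m2_i8m4 (src, 1, lo);       /* emit one vmv2r.v.       */
  return src;
}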
        
---
 .../riscv/riscv-vector-builtins-functions.cc  |  73 ++
 .../riscv/riscv-vector-builtins-functions.def |   6 +
 .../riscv/riscv-vector-builtins-functions.h   |  28 +
 gcc/testsuite/g++.target/riscv/rvv/set-get.C  | 730 ++++++++++++++++++
 .../gcc.target/riscv/rvv/intrinsic/set-get.c  | 730 ++++++++++++++++++
 5 files changed, 1567 insertions(+)
 create mode 100644 gcc/testsuite/g++.target/riscv/rvv/set-get.C
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/intrinsic/set-get.c

diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index fa39eedcd86..9d2895c3d3e 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -1510,6 +1510,79 @@ vundefined::expand (const function_instance &, tree, rtx target) const
   return target;
 }
 
+/* A function implementation for vset functions.  */
+char *
+vset::assemble_name (function_instance &instance)
+{
+  machine_mode tmode = instance.get_arg_pattern ().arg_list[0];
+  machine_mode smode = instance.get_arg_pattern ().arg_list[2];
+  if (GET_MODE_INNER (tmode) != GET_MODE_INNER (smode))
+    return nullptr;
+
+  if (tmode == smode)
+    return nullptr;
+  
+  if (known_lt (GET_MODE_SIZE (tmode), GET_MODE_SIZE (smode)))
+    return nullptr;
+    
+  intrinsic_rename (instance, 0, 2);
+  append_name (instance.get_base_name ());
+  return finish_name ();
+}
+
+void
+vset::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  misc::get_argument_types (instance, argument_types);
+  argument_types.quick_push (size_type_node);
+  argument_types.quick_push (get_dt_t_with_index (instance, 2));
+}
+
+rtx
+vset::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  enum insn_code icode = code_for_vset (instance.get_arg_pattern ().arg_list[0]);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vget functions.  */
+char *
+vget::assemble_name (function_instance &instance)
+{
+  machine_mode tmode = instance.get_arg_pattern ().arg_list[0];
+  machine_mode smode = instance.get_arg_pattern ().arg_list[1];
+  if (GET_MODE_INNER (tmode) != GET_MODE_INNER (smode))
+    return nullptr;
+
+  if (tmode == smode)
+    return nullptr;
+  
+  if (known_gt (GET_MODE_SIZE (tmode), GET_MODE_SIZE (smode)))
+    return nullptr;
+  
+  bool unsigned_p = instance.get_data_type_list ()[0] == DT_unsigned;
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (mode2data_type_str (tmode, unsigned_p, false));
+  return finish_name ();
+}
+
+void
+vget::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  misc::get_argument_types (instance, argument_types);
+  argument_types.quick_push (size_type_node);
+}
+
+rtx
+vget::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  enum insn_code icode = code_for_vget (instance.get_arg_pattern ().arg_list[0]);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
 /* A function implementation for loadstore functions.  */
 char *
 loadstore::assemble_name (function_instance &instance)
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index deb32ccd031..739ae60fff5 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -56,6 +56,12 @@ DEF_RVV_FUNCTION(vlmul_trunc, vlmul_trunc, (2, VITER(VLMULTRUNC, signed), VITER(
 DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VI, signed)), PAT_none, PRED_none, OP_none)
 DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VI, unsigned)), PAT_none, PRED_none, OP_none)
 DEF_RVV_FUNCTION(vundefined, vundefined, (1, VITER(VF, signed)), PAT_none, PRED_none, OP_none)
+DEF_RVV_FUNCTION(vset, vset, (3, VITER(VSETI, signed), VATTR(0, VSETI, signed), VITER(VFULL, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vset, vset, (3, VITER(VSETI, unsigned), VATTR(0, VSETI, unsigned), VITER(VFULL, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vset, vset, (3, VITER(VSETF, signed), VATTR(0, VSETF, signed), VITER(VFULL, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vget, vget, (2, VITER(VGETI, signed), VITER(VFULL, signed)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vget, vget, (2, VITER(VGETI, unsigned), VITER(VFULL, unsigned)), PAT_none, PRED_none, OP_v)
+DEF_RVV_FUNCTION(vget, vget, (2, VITER(VGETF, signed), VITER(VFULL, signed)), PAT_none, PRED_none, OP_v)
 /* 7. Vector Loads and Stores. */
 DEF_RVV_FUNCTION(vle, vle, (2, VITER(VI, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
 DEF_RVV_FUNCTION(vle, vle, (2, VITER(VI, unsigned), VATTR(0, VSUB, c_uptr)), pat_mask_tail, pred_all, OP_v)
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h
index c9e1b2a34ca..90063005024 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.h
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.h
@@ -584,6 +584,34 @@ public:
   virtual rtx expand (const function_instance &, tree, rtx) const override;
 };
 
+/* A function_base for vset functions.  */
+class vset : public misc
+{
+public:
+  // Use the same constructor as misc.
+  using misc::misc;
+
+  virtual char * assemble_name (function_instance &) override;
+  
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
+/* A function_base for vget functions.  */
+class vget : public misc
+{
+public:
+  // Use the same constructor as misc.
+  using misc::misc;
+  
+  virtual char * assemble_name (function_instance &) override;
+  
+  virtual void get_argument_types (const function_instance &, vec<tree> &) const override;
+
+  virtual rtx expand (const function_instance &, tree, rtx) const override;
+};
+
 /* A function_base for loadstore functions.  */
 class loadstore : public function_builder
 {
diff --git a/gcc/testsuite/g++.target/riscv/rvv/set-get.C b/gcc/testsuite/g++.target/riscv/rvv/set-get.C
new file mode 100644
index 00000000000..7c8deb96a39
--- /dev/null
+++ b/gcc/testsuite/g++.target/riscv/rvv/set-get.C
@@ -0,0 +1,730 @@
+/* { dg-do compile } */
+/* { dg-skip-if "test vector intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+
+vint8m2_t
+test_vset_v_i8m1_i8m2 (vint8m2_t dest, vint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m4_t
+test_vset_v_i8m1_i8m4 (vint8m4_t dest, vint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m4_t
+test_vset_v_i8m2_i8m4 (vint8m4_t dest, vint8m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m1_i8m8 (vint8m8_t dest, vint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m2_i8m8 (vint8m8_t dest, vint8m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m4_i8m8 (vint8m8_t dest, vint8m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint8m1_t
+test_vget_v_i8m2_i8m1 (vint8m2_t src)
+{
+  return vget_i8m1(src, 1);
+}
+
+vint8m1_t
+test_vget_v_i8m4_i8m1 (vint8m4_t src)
+{
+  return vget_i8m1(src, 1);
+}
+
+vint8m1_t
+test_vget_v_i8m8_i8m1 (vint8m8_t src)
+{
+  return vget_i8m1(src, 1);
+}
+
+vint8m2_t
+test_vget_v_i8m4_i8m2 (vint8m4_t src)
+{
+  return vget_i8m2(src, 1);
+}
+
+vint8m2_t
+test_vget_v_i8m8_i8m2 (vint8m8_t src)
+{
+  return vget_i8m2(src, 1);
+}
+
+vint8m4_t
+test_vget_v_i8m8_i8m4 (vint8m8_t src)
+{
+  return vget_i8m4(src, 1);
+}
+
+vint16m2_t
+test_vset_v_i16m1_i16m2 (vint16m2_t dest, vint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m4_t
+test_vset_v_i16m1_i16m4 (vint16m4_t dest, vint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m4_t
+test_vset_v_i16m2_i16m4 (vint16m4_t dest, vint16m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m1_i16m8 (vint16m8_t dest, vint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m2_i16m8 (vint16m8_t dest, vint16m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m4_i16m8 (vint16m8_t dest, vint16m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint16m1_t
+test_vget_v_i16m2_i16m1 (vint16m2_t src)
+{
+  return vget_i16m1(src, 1);
+}
+
+vint16m1_t
+test_vget_v_i16m4_i16m1 (vint16m4_t src)
+{
+  return vget_i16m1(src, 1);
+}
+
+vint16m1_t
+test_vget_v_i16m8_i16m1 (vint16m8_t src)
+{
+  return vget_i16m1(src, 1);
+}
+
+vint16m2_t
+test_vget_v_i16m4_i16m2 (vint16m4_t src)
+{
+  return vget_i16m2(src, 1);
+}
+
+vint16m2_t
+test_vget_v_i16m8_i16m2 (vint16m8_t src)
+{
+  return vget_i16m2(src, 1);
+}
+
+vint16m4_t
+test_vget_v_i16m8_i16m4 (vint16m8_t src)
+{
+  return vget_i16m4(src, 1);
+}
+
+vint32m2_t
+test_vset_v_i32m1_i32m2 (vint32m2_t dest, vint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m4_t
+test_vset_v_i32m1_i32m4 (vint32m4_t dest, vint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m4_t
+test_vset_v_i32m2_i32m4 (vint32m4_t dest, vint32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m1_i32m8 (vint32m8_t dest, vint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m2_i32m8 (vint32m8_t dest, vint32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m4_i32m8 (vint32m8_t dest, vint32m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint32m1_t
+test_vget_v_i32m2_i32m1 (vint32m2_t src)
+{
+  return vget_i32m1(src, 1);
+}
+
+vint32m1_t
+test_vget_v_i32m4_i32m1 (vint32m4_t src)
+{
+  return vget_i32m1(src, 1);
+}
+
+vint32m1_t
+test_vget_v_i32m8_i32m1 (vint32m8_t src)
+{
+  return vget_i32m1(src, 1);
+}
+
+vint32m2_t
+test_vget_v_i32m4_i32m2 (vint32m4_t src)
+{
+  return vget_i32m2(src, 1);
+}
+
+vint32m2_t
+test_vget_v_i32m8_i32m2 (vint32m8_t src)
+{
+  return vget_i32m2(src, 1);
+}
+
+vint32m4_t
+test_vget_v_i32m8_i32m4 (vint32m8_t src)
+{
+  return vget_i32m4(src, 1);
+}
+
+vint64m2_t
+test_vset_v_i64m1_i64m2 (vint64m2_t dest, vint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m4_t
+test_vset_v_i64m1_i64m4 (vint64m4_t dest, vint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m4_t
+test_vset_v_i64m2_i64m4 (vint64m4_t dest, vint64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m1_i64m8 (vint64m8_t dest, vint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m2_i64m8 (vint64m8_t dest, vint64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m4_i64m8 (vint64m8_t dest, vint64m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vint64m1_t
+test_vget_v_i64m2_i64m1 (vint64m2_t src)
+{
+  return vget_i64m1(src, 1);
+}
+
+vint64m1_t
+test_vget_v_i64m4_i64m1 (vint64m4_t src)
+{
+  return vget_i64m1(src, 1);
+}
+
+vint64m1_t
+test_vget_v_i64m8_i64m1 (vint64m8_t src)
+{
+  return vget_i64m1(src, 1);
+}
+
+vint64m2_t
+test_vget_v_i64m4_i64m2 (vint64m4_t src)
+{
+  return vget_i64m2(src, 1);
+}
+
+vint64m2_t
+test_vget_v_i64m8_i64m2 (vint64m8_t src)
+{
+  return vget_i64m2(src, 1);
+}
+
+vint64m4_t
+test_vget_v_i64m8_i64m4 (vint64m8_t src)
+{
+  return vget_i64m4(src, 1);
+}
+
+vuint8m2_t
+test_vset_v_u8m1_u8m2 (vuint8m2_t dest, vuint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m4_t
+test_vset_v_u8m1_u8m4 (vuint8m4_t dest, vuint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m4_t
+test_vset_v_u8m2_u8m4 (vuint8m4_t dest, vuint8m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m1_u8m8 (vuint8m8_t dest, vuint8m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m2_u8m8 (vuint8m8_t dest, vuint8m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m4_u8m8 (vuint8m8_t dest, vuint8m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint8m1_t
+test_vget_v_u8m2_u8m1 (vuint8m2_t src)
+{
+  return vget_u8m1(src, 1);
+}
+
+vuint8m1_t
+test_vget_v_u8m4_u8m1 (vuint8m4_t src)
+{
+  return vget_u8m1(src, 1);
+}
+
+vuint8m1_t
+test_vget_v_u8m8_u8m1 (vuint8m8_t src)
+{
+  return vget_u8m1(src, 1);
+}
+
+vuint8m2_t
+test_vget_v_u8m4_u8m2 (vuint8m4_t src)
+{
+  return vget_u8m2(src, 1);
+}
+
+vuint8m2_t
+test_vget_v_u8m8_u8m2 (vuint8m8_t src)
+{
+  return vget_u8m2(src, 1);
+}
+
+vuint8m4_t
+test_vget_v_u8m8_u8m4 (vuint8m8_t src)
+{
+  return vget_u8m4(src, 1);
+}
+
+vuint16m2_t
+test_vset_v_u16m1_u16m2 (vuint16m2_t dest, vuint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m4_t
+test_vset_v_u16m1_u16m4 (vuint16m4_t dest, vuint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m4_t
+test_vset_v_u16m2_u16m4 (vuint16m4_t dest, vuint16m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m1_u16m8 (vuint16m8_t dest, vuint16m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m2_u16m8 (vuint16m8_t dest, vuint16m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m4_u16m8 (vuint16m8_t dest, vuint16m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint16m1_t
+test_vget_v_u16m2_u16m1 (vuint16m2_t src)
+{
+  return vget_u16m1(src, 1);
+}
+
+vuint16m1_t
+test_vget_v_u16m4_u16m1 (vuint16m4_t src)
+{
+  return vget_u16m1(src, 1);
+}
+
+vuint16m1_t
+test_vget_v_u16m8_u16m1 (vuint16m8_t src)
+{
+  return vget_u16m1(src, 1);
+}
+
+vuint16m2_t
+test_vget_v_u16m4_u16m2 (vuint16m4_t src)
+{
+  return vget_u16m2(src, 1);
+}
+
+vuint16m2_t
+test_vget_v_u16m8_u16m2 (vuint16m8_t src)
+{
+  return vget_u16m2(src, 1);
+}
+
+vuint16m4_t
+test_vget_v_u16m8_u16m4 (vuint16m8_t src)
+{
+  return vget_u16m4(src, 1);
+}
+
+vuint32m2_t
+test_vset_v_u32m1_u32m2 (vuint32m2_t dest, vuint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m4_t
+test_vset_v_u32m1_u32m4 (vuint32m4_t dest, vuint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m4_t
+test_vset_v_u32m2_u32m4 (vuint32m4_t dest, vuint32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m1_u32m8 (vuint32m8_t dest, vuint32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m2_u32m8 (vuint32m8_t dest, vuint32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m4_u32m8 (vuint32m8_t dest, vuint32m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint32m1_t
+test_vget_v_u32m2_u32m1 (vuint32m2_t src)
+{
+  return vget_u32m1(src, 1);
+}
+
+vuint32m1_t
+test_vget_v_u32m4_u32m1 (vuint32m4_t src)
+{
+  return vget_u32m1(src, 1);
+}
+
+vuint32m1_t
+test_vget_v_u32m8_u32m1 (vuint32m8_t src)
+{
+  return vget_u32m1(src, 1);
+}
+
+vuint32m2_t
+test_vget_v_u32m4_u32m2 (vuint32m4_t src)
+{
+  return vget_u32m2(src, 1);
+}
+
+vuint32m2_t
+test_vget_v_u32m8_u32m2 (vuint32m8_t src)
+{
+  return vget_u32m2(src, 1);
+}
+
+vuint32m4_t
+test_vget_v_u32m8_u32m4 (vuint32m8_t src)
+{
+  return vget_u32m4(src, 1);
+}
+
+vuint64m2_t
+test_vset_v_u64m1_u64m2 (vuint64m2_t dest, vuint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m4_t
+test_vset_v_u64m1_u64m4 (vuint64m4_t dest, vuint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m4_t
+test_vset_v_u64m2_u64m4 (vuint64m4_t dest, vuint64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m1_u64m8 (vuint64m8_t dest, vuint64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m2_u64m8 (vuint64m8_t dest, vuint64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m4_u64m8 (vuint64m8_t dest, vuint64m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vuint64m1_t
+test_vget_v_u64m2_u64m1 (vuint64m2_t src)
+{
+  return vget_u64m1(src, 1);
+}
+
+vuint64m1_t
+test_vget_v_u64m4_u64m1 (vuint64m4_t src)
+{
+  return vget_u64m1(src, 1);
+}
+
+vuint64m1_t
+test_vget_v_u64m8_u64m1 (vuint64m8_t src)
+{
+  return vget_u64m1(src, 1);
+}
+
+vuint64m2_t
+test_vget_v_u64m4_u64m2 (vuint64m4_t src)
+{
+  return vget_u64m2(src, 1);
+}
+
+vuint64m2_t
+test_vget_v_u64m8_u64m2 (vuint64m8_t src)
+{
+  return vget_u64m2(src, 1);
+}
+
+vuint64m4_t
+test_vget_v_u64m8_u64m4 (vuint64m8_t src)
+{
+  return vget_u64m4(src, 1);
+}
+
+vfloat32m2_t
+test_vset_v_f32m1_f32m2 (vfloat32m2_t dest, vfloat32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m4_t
+test_vset_v_f32m1_f32m4 (vfloat32m4_t dest, vfloat32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m4_t
+test_vset_v_f32m2_f32m4 (vfloat32m4_t dest, vfloat32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m1_f32m8 (vfloat32m8_t dest, vfloat32m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m2_f32m8 (vfloat32m8_t dest, vfloat32m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m4_f32m8 (vfloat32m8_t dest, vfloat32m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat32m1_t
+test_vget_v_f32m2_f32m1 (vfloat32m2_t src)
+{
+  return vget_f32m1(src, 1);
+}
+
+vfloat32m1_t
+test_vget_v_f32m4_f32m1 (vfloat32m4_t src)
+{
+  return vget_f32m1(src, 1);
+}
+
+vfloat32m1_t
+test_vget_v_f32m8_f32m1 (vfloat32m8_t src)
+{
+  return vget_f32m1(src, 1);
+}
+
+vfloat32m2_t
+test_vget_v_f32m4_f32m2 (vfloat32m4_t src)
+{
+  return vget_f32m2(src, 1);
+}
+
+vfloat32m2_t
+test_vget_v_f32m8_f32m2 (vfloat32m8_t src)
+{
+  return vget_f32m2(src, 1);
+}
+
+vfloat32m4_t
+test_vget_v_f32m8_f32m4 (vfloat32m8_t src)
+{
+  return vget_f32m4(src, 1);
+}
+
+vfloat64m2_t
+test_vset_v_f64m1_f64m2 (vfloat64m2_t dest, vfloat64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m4_t
+test_vset_v_f64m1_f64m4 (vfloat64m4_t dest, vfloat64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m4_t
+test_vset_v_f64m2_f64m4 (vfloat64m4_t dest, vfloat64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m1_f64m8 (vfloat64m8_t dest, vfloat64m1_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m2_f64m8 (vfloat64m8_t dest, vfloat64m2_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m4_f64m8 (vfloat64m8_t dest, vfloat64m4_t val)
+{
+  return vset(dest, 1, val);
+}
+
+vfloat64m1_t
+test_vget_v_f64m2_f64m1 (vfloat64m2_t src)
+{
+  return vget_f64m1(src, 1);
+}
+
+vfloat64m1_t
+test_vget_v_f64m4_f64m1 (vfloat64m4_t src)
+{
+  return vget_f64m1(src, 1);
+}
+
+vfloat64m1_t
+test_vget_v_f64m8_f64m1 (vfloat64m8_t src)
+{
+  return vget_f64m1(src, 1);
+}
+
+vfloat64m2_t
+test_vget_v_f64m4_f64m2 (vfloat64m4_t src)
+{
+  return vget_f64m2(src, 1);
+}
+
+vfloat64m2_t
+test_vget_v_f64m8_f64m2 (vfloat64m8_t src)
+{
+  return vget_f64m2(src, 1);
+}
+
+vfloat64m4_t
+test_vget_v_f64m8_f64m4 (vfloat64m8_t src)
+{
+  return vget_f64m4(src, 1);
+}
+/* { dg-final { scan-assembler-times {vmv1r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 60 } } */
+/* { dg-final { scan-assembler-times {vmv2r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 40 } } */
+/* { dg-final { scan-assembler-times {vmv4r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 20 } } */
+
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/set-get.c b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/set-get.c
new file mode 100644
index 00000000000..33d5a129aae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/intrinsic/set-get.c
@@ -0,0 +1,730 @@
+
+/* { dg-do compile } */
+/* { dg-skip-if "test vector intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */
+
+#include <stddef.h>
+#include <riscv_vector.h>
+
+
+vint8m2_t
+test_vset_v_i8m1_i8m2 (vint8m2_t dest, vint8m1_t val)
+{
+  return vset_v_i8m1_i8m2(dest, 1, val);
+}
+
+vint8m4_t
+test_vset_v_i8m1_i8m4 (vint8m4_t dest, vint8m1_t val)
+{
+  return vset_v_i8m1_i8m4(dest, 1, val);
+}
+
+vint8m4_t
+test_vset_v_i8m2_i8m4 (vint8m4_t dest, vint8m2_t val)
+{
+  return vset_v_i8m2_i8m4(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m1_i8m8 (vint8m8_t dest, vint8m1_t val)
+{
+  return vset_v_i8m1_i8m8(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m2_i8m8 (vint8m8_t dest, vint8m2_t val)
+{
+  return vset_v_i8m2_i8m8(dest, 1, val);
+}
+
+vint8m8_t
+test_vset_v_i8m4_i8m8 (vint8m8_t dest, vint8m4_t val)
+{
+  return vset_v_i8m4_i8m8(dest, 1, val);
+}
+
+vint8m1_t
+test_vget_v_i8m2_i8m1 (vint8m2_t src)
+{
+  return vget_v_i8m2_i8m1(src, 1);
+}
+
+vint8m1_t
+test_vget_v_i8m4_i8m1 (vint8m4_t src)
+{
+  return vget_v_i8m4_i8m1(src, 1);
+}
+
+vint8m1_t
+test_vget_v_i8m8_i8m1 (vint8m8_t src)
+{
+  return vget_v_i8m8_i8m1(src, 1);
+}
+
+vint8m2_t
+test_vget_v_i8m4_i8m2 (vint8m4_t src)
+{
+  return vget_v_i8m4_i8m2(src, 1);
+}
+
+vint8m2_t
+test_vget_v_i8m8_i8m2 (vint8m8_t src)
+{
+  return vget_v_i8m8_i8m2(src, 1);
+}
+
+vint8m4_t
+test_vget_v_i8m8_i8m4 (vint8m8_t src)
+{
+  return vget_v_i8m8_i8m4(src, 1);
+}
+
+vint16m2_t
+test_vset_v_i16m1_i16m2 (vint16m2_t dest, vint16m1_t val)
+{
+  return vset_v_i16m1_i16m2(dest, 1, val);
+}
+
+vint16m4_t
+test_vset_v_i16m1_i16m4 (vint16m4_t dest, vint16m1_t val)
+{
+  return vset_v_i16m1_i16m4(dest, 1, val);
+}
+
+vint16m4_t
+test_vset_v_i16m2_i16m4 (vint16m4_t dest, vint16m2_t val)
+{
+  return vset_v_i16m2_i16m4(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m1_i16m8 (vint16m8_t dest, vint16m1_t val)
+{
+  return vset_v_i16m1_i16m8(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m2_i16m8 (vint16m8_t dest, vint16m2_t val)
+{
+  return vset_v_i16m2_i16m8(dest, 1, val);
+}
+
+vint16m8_t
+test_vset_v_i16m4_i16m8 (vint16m8_t dest, vint16m4_t val)
+{
+  return vset_v_i16m4_i16m8(dest, 1, val);
+}
+
+vint16m1_t
+test_vget_v_i16m2_i16m1 (vint16m2_t src)
+{
+  return vget_v_i16m2_i16m1(src, 1);
+}
+
+vint16m1_t
+test_vget_v_i16m4_i16m1 (vint16m4_t src)
+{
+  return vget_v_i16m4_i16m1(src, 1);
+}
+
+vint16m1_t
+test_vget_v_i16m8_i16m1 (vint16m8_t src)
+{
+  return vget_v_i16m8_i16m1(src, 1);
+}
+
+vint16m2_t
+test_vget_v_i16m4_i16m2 (vint16m4_t src)
+{
+  return vget_v_i16m4_i16m2(src, 1);
+}
+
+vint16m2_t
+test_vget_v_i16m8_i16m2 (vint16m8_t src)
+{
+  return vget_v_i16m8_i16m2(src, 1);
+}
+
+vint16m4_t
+test_vget_v_i16m8_i16m4 (vint16m8_t src)
+{
+  return vget_v_i16m8_i16m4(src, 1);
+}
+
+vint32m2_t
+test_vset_v_i32m1_i32m2 (vint32m2_t dest, vint32m1_t val)
+{
+  return vset_v_i32m1_i32m2(dest, 1, val);
+}
+
+vint32m4_t
+test_vset_v_i32m1_i32m4 (vint32m4_t dest, vint32m1_t val)
+{
+  return vset_v_i32m1_i32m4(dest, 1, val);
+}
+
+vint32m4_t
+test_vset_v_i32m2_i32m4 (vint32m4_t dest, vint32m2_t val)
+{
+  return vset_v_i32m2_i32m4(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m1_i32m8 (vint32m8_t dest, vint32m1_t val)
+{
+  return vset_v_i32m1_i32m8(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m2_i32m8 (vint32m8_t dest, vint32m2_t val)
+{
+  return vset_v_i32m2_i32m8(dest, 1, val);
+}
+
+vint32m8_t
+test_vset_v_i32m4_i32m8 (vint32m8_t dest, vint32m4_t val)
+{
+  return vset_v_i32m4_i32m8(dest, 1, val);
+}
+
+vint32m1_t
+test_vget_v_i32m2_i32m1 (vint32m2_t src)
+{
+  return vget_v_i32m2_i32m1(src, 1);
+}
+
+vint32m1_t
+test_vget_v_i32m4_i32m1 (vint32m4_t src)
+{
+  return vget_v_i32m4_i32m1(src, 1);
+}
+
+vint32m1_t
+test_vget_v_i32m8_i32m1 (vint32m8_t src)
+{
+  return vget_v_i32m8_i32m1(src, 1);
+}
+
+vint32m2_t
+test_vget_v_i32m4_i32m2 (vint32m4_t src)
+{
+  return vget_v_i32m4_i32m2(src, 1);
+}
+
+vint32m2_t
+test_vget_v_i32m8_i32m2 (vint32m8_t src)
+{
+  return vget_v_i32m8_i32m2(src, 1);
+}
+
+vint32m4_t
+test_vget_v_i32m8_i32m4 (vint32m8_t src)
+{
+  return vget_v_i32m8_i32m4(src, 1);
+}
+
+vint64m2_t
+test_vset_v_i64m1_i64m2 (vint64m2_t dest, vint64m1_t val)
+{
+  return vset_v_i64m1_i64m2(dest, 1, val);
+}
+
+vint64m4_t
+test_vset_v_i64m1_i64m4 (vint64m4_t dest, vint64m1_t val)
+{
+  return vset_v_i64m1_i64m4(dest, 1, val);
+}
+
+vint64m4_t
+test_vset_v_i64m2_i64m4 (vint64m4_t dest, vint64m2_t val)
+{
+  return vset_v_i64m2_i64m4(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m1_i64m8 (vint64m8_t dest, vint64m1_t val)
+{
+  return vset_v_i64m1_i64m8(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m2_i64m8 (vint64m8_t dest, vint64m2_t val)
+{
+  return vset_v_i64m2_i64m8(dest, 1, val);
+}
+
+vint64m8_t
+test_vset_v_i64m4_i64m8 (vint64m8_t dest, vint64m4_t val)
+{
+  return vset_v_i64m4_i64m8(dest, 1, val);
+}
+
+vint64m1_t
+test_vget_v_i64m2_i64m1 (vint64m2_t src)
+{
+  return vget_v_i64m2_i64m1(src, 1);
+}
+
+vint64m1_t
+test_vget_v_i64m4_i64m1 (vint64m4_t src)
+{
+  return vget_v_i64m4_i64m1(src, 1);
+}
+
+vint64m1_t
+test_vget_v_i64m8_i64m1 (vint64m8_t src)
+{
+  return vget_v_i64m8_i64m1(src, 1);
+}
+
+vint64m2_t
+test_vget_v_i64m4_i64m2 (vint64m4_t src)
+{
+  return vget_v_i64m4_i64m2(src, 1);
+}
+
+vint64m2_t
+test_vget_v_i64m8_i64m2 (vint64m8_t src)
+{
+  return vget_v_i64m8_i64m2(src, 1);
+}
+
+vint64m4_t
+test_vget_v_i64m8_i64m4 (vint64m8_t src)
+{
+  return vget_v_i64m8_i64m4(src, 1);
+}
+
+vuint8m2_t
+test_vset_v_u8m1_u8m2 (vuint8m2_t dest, vuint8m1_t val)
+{
+  return vset_v_u8m1_u8m2(dest, 1, val);
+}
+
+vuint8m4_t
+test_vset_v_u8m1_u8m4 (vuint8m4_t dest, vuint8m1_t val)
+{
+  return vset_v_u8m1_u8m4(dest, 1, val);
+}
+
+vuint8m4_t
+test_vset_v_u8m2_u8m4 (vuint8m4_t dest, vuint8m2_t val)
+{
+  return vset_v_u8m2_u8m4(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m1_u8m8 (vuint8m8_t dest, vuint8m1_t val)
+{
+  return vset_v_u8m1_u8m8(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m2_u8m8 (vuint8m8_t dest, vuint8m2_t val)
+{
+  return vset_v_u8m2_u8m8(dest, 1, val);
+}
+
+vuint8m8_t
+test_vset_v_u8m4_u8m8 (vuint8m8_t dest, vuint8m4_t val)
+{
+  return vset_v_u8m4_u8m8(dest, 1, val);
+}
+
+vuint8m1_t
+test_vget_v_u8m2_u8m1 (vuint8m2_t src)
+{
+  return vget_v_u8m2_u8m1(src, 1);
+}
+
+vuint8m1_t
+test_vget_v_u8m4_u8m1 (vuint8m4_t src)
+{
+  return vget_v_u8m4_u8m1(src, 1);
+}
+
+vuint8m1_t
+test_vget_v_u8m8_u8m1 (vuint8m8_t src)
+{
+  return vget_v_u8m8_u8m1(src, 1);
+}
+
+vuint8m2_t
+test_vget_v_u8m4_u8m2 (vuint8m4_t src)
+{
+  return vget_v_u8m4_u8m2(src, 1);
+}
+
+vuint8m2_t
+test_vget_v_u8m8_u8m2 (vuint8m8_t src)
+{
+  return vget_v_u8m8_u8m2(src, 1);
+}
+
+vuint8m4_t
+test_vget_v_u8m8_u8m4 (vuint8m8_t src)
+{
+  return vget_v_u8m8_u8m4(src, 1);
+}
+
+vuint16m2_t
+test_vset_v_u16m1_u16m2 (vuint16m2_t dest, vuint16m1_t val)
+{
+  return vset_v_u16m1_u16m2(dest, 1, val);
+}
+
+vuint16m4_t
+test_vset_v_u16m1_u16m4 (vuint16m4_t dest, vuint16m1_t val)
+{
+  return vset_v_u16m1_u16m4(dest, 1, val);
+}
+
+vuint16m4_t
+test_vset_v_u16m2_u16m4 (vuint16m4_t dest, vuint16m2_t val)
+{
+  return vset_v_u16m2_u16m4(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m1_u16m8 (vuint16m8_t dest, vuint16m1_t val)
+{
+  return vset_v_u16m1_u16m8(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m2_u16m8 (vuint16m8_t dest, vuint16m2_t val)
+{
+  return vset_v_u16m2_u16m8(dest, 1, val);
+}
+
+vuint16m8_t
+test_vset_v_u16m4_u16m8 (vuint16m8_t dest, vuint16m4_t val)
+{
+  return vset_v_u16m4_u16m8(dest, 1, val);
+}
+
+vuint16m1_t
+test_vget_v_u16m2_u16m1 (vuint16m2_t src)
+{
+  return vget_v_u16m2_u16m1(src, 1);
+}
+
+vuint16m1_t
+test_vget_v_u16m4_u16m1 (vuint16m4_t src)
+{
+  return vget_v_u16m4_u16m1(src, 1);
+}
+
+vuint16m1_t
+test_vget_v_u16m8_u16m1 (vuint16m8_t src)
+{
+  return vget_v_u16m8_u16m1(src, 1);
+}
+
+vuint16m2_t
+test_vget_v_u16m4_u16m2 (vuint16m4_t src)
+{
+  return vget_v_u16m4_u16m2(src, 1);
+}
+
+vuint16m2_t
+test_vget_v_u16m8_u16m2 (vuint16m8_t src)
+{
+  return vget_v_u16m8_u16m2(src, 1);
+}
+
+vuint16m4_t
+test_vget_v_u16m8_u16m4 (vuint16m8_t src)
+{
+  return vget_v_u16m8_u16m4(src, 1);
+}
+
+vuint32m2_t
+test_vset_v_u32m1_u32m2 (vuint32m2_t dest, vuint32m1_t val)
+{
+  return vset_v_u32m1_u32m2(dest, 1, val);
+}
+
+vuint32m4_t
+test_vset_v_u32m1_u32m4 (vuint32m4_t dest, vuint32m1_t val)
+{
+  return vset_v_u32m1_u32m4(dest, 1, val);
+}
+
+vuint32m4_t
+test_vset_v_u32m2_u32m4 (vuint32m4_t dest, vuint32m2_t val)
+{
+  return vset_v_u32m2_u32m4(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m1_u32m8 (vuint32m8_t dest, vuint32m1_t val)
+{
+  return vset_v_u32m1_u32m8(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m2_u32m8 (vuint32m8_t dest, vuint32m2_t val)
+{
+  return vset_v_u32m2_u32m8(dest, 1, val);
+}
+
+vuint32m8_t
+test_vset_v_u32m4_u32m8 (vuint32m8_t dest, vuint32m4_t val)
+{
+  return vset_v_u32m4_u32m8(dest, 1, val);
+}
+
+vuint32m1_t
+test_vget_v_u32m2_u32m1 (vuint32m2_t src)
+{
+  return vget_v_u32m2_u32m1(src, 1);
+}
+
+vuint32m1_t
+test_vget_v_u32m4_u32m1 (vuint32m4_t src)
+{
+  return vget_v_u32m4_u32m1(src, 1);
+}
+
+vuint32m1_t
+test_vget_v_u32m8_u32m1 (vuint32m8_t src)
+{
+  return vget_v_u32m8_u32m1(src, 1);
+}
+
+vuint32m2_t
+test_vget_v_u32m4_u32m2 (vuint32m4_t src)
+{
+  return vget_v_u32m4_u32m2(src, 1);
+}
+
+vuint32m2_t
+test_vget_v_u32m8_u32m2 (vuint32m8_t src)
+{
+  return vget_v_u32m8_u32m2(src, 1);
+}
+
+vuint32m4_t
+test_vget_v_u32m8_u32m4 (vuint32m8_t src)
+{
+  return vget_v_u32m8_u32m4(src, 1);
+}
+
+vuint64m2_t
+test_vset_v_u64m1_u64m2 (vuint64m2_t dest, vuint64m1_t val)
+{
+  return vset_v_u64m1_u64m2(dest, 1, val);
+}
+
+vuint64m4_t
+test_vset_v_u64m1_u64m4 (vuint64m4_t dest, vuint64m1_t val)
+{
+  return vset_v_u64m1_u64m4(dest, 1, val);
+}
+
+vuint64m4_t
+test_vset_v_u64m2_u64m4 (vuint64m4_t dest, vuint64m2_t val)
+{
+  return vset_v_u64m2_u64m4(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m1_u64m8 (vuint64m8_t dest, vuint64m1_t val)
+{
+  return vset_v_u64m1_u64m8(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m2_u64m8 (vuint64m8_t dest, vuint64m2_t val)
+{
+  return vset_v_u64m2_u64m8(dest, 1, val);
+}
+
+vuint64m8_t
+test_vset_v_u64m4_u64m8 (vuint64m8_t dest, vuint64m4_t val)
+{
+  return vset_v_u64m4_u64m8(dest, 1, val);
+}
+
+vuint64m1_t
+test_vget_v_u64m2_u64m1 (vuint64m2_t src)
+{
+  return vget_v_u64m2_u64m1(src, 1);
+}
+
+vuint64m1_t
+test_vget_v_u64m4_u64m1 (vuint64m4_t src)
+{
+  return vget_v_u64m4_u64m1(src, 1);
+}
+
+vuint64m1_t
+test_vget_v_u64m8_u64m1 (vuint64m8_t src)
+{
+  return vget_v_u64m8_u64m1(src, 1);
+}
+
+vuint64m2_t
+test_vget_v_u64m4_u64m2 (vuint64m4_t src)
+{
+  return vget_v_u64m4_u64m2(src, 1);
+}
+
+vuint64m2_t
+test_vget_v_u64m8_u64m2 (vuint64m8_t src)
+{
+  return vget_v_u64m8_u64m2(src, 1);
+}
+
+vuint64m4_t
+test_vget_v_u64m8_u64m4 (vuint64m8_t src)
+{
+  return vget_v_u64m8_u64m4(src, 1);
+}
+
+vfloat32m2_t
+test_vset_v_f32m1_f32m2 (vfloat32m2_t dest, vfloat32m1_t val)
+{
+  return vset_v_f32m1_f32m2(dest, 1, val);
+}
+
+vfloat32m4_t
+test_vset_v_f32m1_f32m4 (vfloat32m4_t dest, vfloat32m1_t val)
+{
+  return vset_v_f32m1_f32m4(dest, 1, val);
+}
+
+vfloat32m4_t
+test_vset_v_f32m2_f32m4 (vfloat32m4_t dest, vfloat32m2_t val)
+{
+  return vset_v_f32m2_f32m4(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m1_f32m8 (vfloat32m8_t dest, vfloat32m1_t val)
+{
+  return vset_v_f32m1_f32m8(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m2_f32m8 (vfloat32m8_t dest, vfloat32m2_t val)
+{
+  return vset_v_f32m2_f32m8(dest, 1, val);
+}
+
+vfloat32m8_t
+test_vset_v_f32m4_f32m8 (vfloat32m8_t dest, vfloat32m4_t val)
+{
+  return vset_v_f32m4_f32m8(dest, 1, val);
+}
+
+vfloat32m1_t
+test_vget_v_f32m2_f32m1 (vfloat32m2_t src)
+{
+  return vget_v_f32m2_f32m1(src, 1);
+}
+
+vfloat32m1_t
+test_vget_v_f32m4_f32m1 (vfloat32m4_t src)
+{
+  return vget_v_f32m4_f32m1(src, 1);
+}
+
+vfloat32m1_t
+test_vget_v_f32m8_f32m1 (vfloat32m8_t src)
+{
+  return vget_v_f32m8_f32m1(src, 1);
+}
+
+vfloat32m2_t
+test_vget_v_f32m4_f32m2 (vfloat32m4_t src)
+{
+  return vget_v_f32m4_f32m2(src, 1);
+}
+
+vfloat32m2_t
+test_vget_v_f32m8_f32m2 (vfloat32m8_t src)
+{
+  return vget_v_f32m8_f32m2(src, 1);
+}
+
+vfloat32m4_t
+test_vget_v_f32m8_f32m4 (vfloat32m8_t src)
+{
+  return vget_v_f32m8_f32m4(src, 1);
+}
+
+vfloat64m2_t
+test_vset_v_f64m1_f64m2 (vfloat64m2_t dest, vfloat64m1_t val)
+{
+  return vset_v_f64m1_f64m2(dest, 1, val);
+}
+
+vfloat64m4_t
+test_vset_v_f64m1_f64m4 (vfloat64m4_t dest, vfloat64m1_t val)
+{
+  return vset_v_f64m1_f64m4(dest, 1, val);
+}
+
+vfloat64m4_t
+test_vset_v_f64m2_f64m4 (vfloat64m4_t dest, vfloat64m2_t val)
+{
+  return vset_v_f64m2_f64m4(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m1_f64m8 (vfloat64m8_t dest, vfloat64m1_t val)
+{
+  return vset_v_f64m1_f64m8(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m2_f64m8 (vfloat64m8_t dest, vfloat64m2_t val)
+{
+  return vset_v_f64m2_f64m8(dest, 1, val);
+}
+
+vfloat64m8_t
+test_vset_v_f64m4_f64m8 (vfloat64m8_t dest, vfloat64m4_t val)
+{
+  return vset_v_f64m4_f64m8(dest, 1, val);
+}
+
+vfloat64m1_t
+test_vget_v_f64m2_f64m1 (vfloat64m2_t src)
+{
+  return vget_v_f64m2_f64m1(src, 1);
+}
+
+vfloat64m1_t
+test_vget_v_f64m4_f64m1 (vfloat64m4_t src)
+{
+  return vget_v_f64m4_f64m1(src, 1);
+}
+
+vfloat64m1_t
+test_vget_v_f64m8_f64m1 (vfloat64m8_t src)
+{
+  return vget_v_f64m8_f64m1(src, 1);
+}
+
+vfloat64m2_t
+test_vget_v_f64m4_f64m2 (vfloat64m4_t src)
+{
+  return vget_v_f64m4_f64m2(src, 1);
+}
+
+vfloat64m2_t
+test_vget_v_f64m8_f64m2 (vfloat64m8_t src)
+{
+  return vget_v_f64m8_f64m2(src, 1);
+}
+
+vfloat64m4_t
+test_vget_v_f64m8_f64m4 (vfloat64m8_t src)
+{
+  return vget_v_f64m8_f64m4(src, 1);
+}
+/* { dg-final { scan-assembler-times {vmv1r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 60 } } */
+/* { dg-final { scan-assembler-times {vmv2r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 40 } } */
+/* { dg-final { scan-assembler-times {vmv4r\.v\s+(?:v[0-9]|v[1-2][0-9]|v3[0-1]),\s*(?:v[0-9]|v[1-2][0-9]|v3[0-1])} 20 } } */
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 13/21]   Adjust scalable frame and full testcases
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (10 preceding siblings ...)
  2022-05-31  8:50 ` [PATCH 12/21] Add set get intrinsic support juzhe.zhong
@ 2022-05-31  8:50 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 15/21] Add integer intrinsics juzhe.zhong
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:50 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-vector.cc (rvv_adjust_frame): New function.
        * config/riscv/riscv-vector.h (rvv_adjust_frame): Declare.
        * config/riscv/riscv.cc (riscv_compute_frame_info): Adjust frame manipulation for RVV scalable vectors.
        (riscv_first_stack_step): Likewise.
        (riscv_expand_prologue): Likewise.
        (riscv_expand_epilogue): Likewise.
        (riscv_dwarf_poly_indeterminate_value): New function.
        (riscv_estimated_poly_value): New function.
        (TARGET_DWARF_POLY_INDETERMINATE_VALUE): New target hook.
        (TARGET_ESTIMATED_POLY_VALUE): New target hook.
        * config/riscv/riscv.h (RISCV_PROLOGUE_TEMP2_REGNUM): New macro define.
        (RISCV_PROLOGUE_TEMP2): New macro define.
        (RISCV_DWARF_VLENB): New macro define.

gcc/testsuite/ChangeLog:

        * gcc.target/riscv/rvv/stack/rvv-stack.exp: New test.
        * gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-scalar.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-vector_1.c: New test.
        * gcc.target/riscv/rvv/stack/stack-check-vector_2.c: New test.
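
Example (illustration only, not part of the patch; the helper name is
hypothetical).  A frame with both a constant-sized scalar part and a
scalable RVV part might come from:

#include <riscv_vector.h>

extern void clobber (int *);

void
spill (int32_t *p, vint32m1_t v, size_t vl)
{
  int buf[64];               /* Constant part of the frame.         */
  clobber (buf);             /* The call forces V to be spilled to  */
  vse32_v_i32m1 (p, v, vl);  /* the scalable part of the frame.     */
}

The prologue is expected to adjust sp in two steps: a constant step
(addi, or li + sub for large frames) for the scalar part, then a
scalable step whose size is computed into RISCV_PROLOGUE_TEMP2 by
rvv_expand_poly_move; the epilogue performs the two steps in the
reverse order.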
        
---
 gcc/config/riscv/riscv-vector.cc              |  33 +++
 gcc/config/riscv/riscv-vector.h               |   1 +
 gcc/config/riscv/riscv.cc                     | 275 ++++++++++++-----
 gcc/config/riscv/riscv.h                      |   4 +
 .../gcc.target/riscv/rvv/stack/rvv-stack.exp  |  47 +++
 .../rvv/stack/stack-check-alloca-scalar.c     |  53 ++++
 .../rvv/stack/stack-check-alloca-vector.c     |  45 +++
 .../stack/stack-check-save-restore-scalar.c   |  48 +++
 .../stack/stack-check-save-restore-vector.c   |  62 ++++
 .../riscv/rvv/stack/stack-check-scalar.c      | 205 +++++++++++++
 .../rvv/stack/stack-check-vararg-scalar.c     |  33 +++
 .../riscv/rvv/stack/stack-check-vector_1.c    | 277 ++++++++++++++++++
 .../riscv/rvv/stack/stack-check-vector_2.c    | 141 +++++++++
 13 files changed, 1143 insertions(+), 81 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/rvv-stack.exp
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_1.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_2.c

diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc
index d09fc1b8e49..4cb5e79421d 100644
--- a/gcc/config/riscv/riscv-vector.cc
+++ b/gcc/config/riscv/riscv-vector.cc
@@ -846,6 +846,39 @@ rvv_expand_poly_move (machine_mode mode, rtx dest, rtx clobber, rtx src)
     emit_insn (gen_rtx_SET (dest, riscv_add_offset (clobber, dest, constant)));
 }
 
+/* Adjust the scalable part of the frame for the prologue and epilogue.  */
+void
+rvv_adjust_frame (rtx target, poly_int64 offset, bool epilogue)
+{
+  rtx clobber = RISCV_PROLOGUE_TEMP (Pmode);
+  rtx space = RISCV_PROLOGUE_TEMP2 (Pmode);
+  rtx insn, dwarf, adjust_frame_rtx;
+
+  rvv_expand_poly_move (Pmode, space, clobber, gen_int_mode (offset, Pmode));
+
+  if (epilogue)
+    {
+      insn = gen_add3_insn (target, target, space);
+    }
+  else
+    {
+      insn = gen_sub3_insn (target, target, space);
+    }
+
+  insn = emit_insn (insn);
+
+  RTX_FRAME_RELATED_P (insn) = 1;
+
+  adjust_frame_rtx =
+    gen_rtx_SET (target,
+                 plus_constant (Pmode, target, epilogue ? offset : -offset));
+
+  dwarf = alloc_reg_note (REG_FRAME_RELATED_EXPR,
+                          copy_rtx (adjust_frame_rtx), NULL_RTX);
+
+  REG_NOTES (insn) = dwarf;
+}
+
 /* Helper functions for handling sew=64 on RV32 system. */
 bool
 imm32_p (rtx a)
diff --git a/gcc/config/riscv/riscv-vector.h b/gcc/config/riscv/riscv-vector.h
index b70cf676e26..98f47ea0ec1 100644
--- a/gcc/config/riscv/riscv-vector.h
+++ b/gcc/config/riscv/riscv-vector.h
@@ -22,4 +22,5 @@
 #define GCC_RISCV_VECTOR_H
 void rvv_report_required (void);
 void rvv_expand_poly_move (machine_mode, rtx, rtx, rtx);
+void rvv_adjust_frame (rtx, poly_int64, bool);
 #endif // GCC_RISCV_VECTOR_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 832c1754002..29106bbf6fe 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -4357,7 +4357,11 @@ riscv_compute_frame_info (void)
      padding.  */
   frame->arg_pointer_offset = offset - crtl->args.pretend_args_size;
   frame->total_size = offset;
-                        
+  
+  /* Calculate the constant offset of a scalable frame.  We handle the
+     constant and scalable parts of the frame separately.  */
+  frame->constant_offset = riscv_stack_align (frame->total_size.coeffs[0]) -
+                           riscv_stack_align (frame->total_size.coeffs[1]);              
   /* Next points the incoming stack pointer and any incoming arguments. */
 
   /* Only use save/restore routines when the GPRs are atop the frame.  */
@@ -4538,21 +4542,27 @@ riscv_restore_reg (rtx reg, rtx mem)
 static HOST_WIDE_INT
 riscv_first_stack_step (struct riscv_frame_info *frame)
 {
-  if (SMALL_OPERAND (frame->total_size.to_constant()))
-    return frame->total_size.to_constant ();
+  HOST_WIDE_INT frame_total_size;
+  if (!frame->total_size.is_constant())
+    frame_total_size = frame->constant_offset;
+  else
+    frame_total_size = frame->total_size.to_constant();
+
+  if (SMALL_OPERAND (frame_total_size))
+    return frame_total_size;
 
   HOST_WIDE_INT min_first_step =
     RISCV_STACK_ALIGN ((frame->total_size - frame->fp_sp_offset).to_constant());
   HOST_WIDE_INT max_first_step = IMM_REACH / 2 - PREFERRED_STACK_BOUNDARY / 8;
-  HOST_WIDE_INT min_second_step = frame->total_size.to_constant() - max_first_step;
+  HOST_WIDE_INT min_second_step = frame_total_size - max_first_step;
   gcc_assert (min_first_step <= max_first_step);
 
   /* As an optimization, use the least-significant bits of the total frame
      size, so that the second adjustment step is just LUI + ADD.  */
   if (!SMALL_OPERAND (min_second_step)
-      && frame->total_size.to_constant() % IMM_REACH < IMM_REACH / 2
-      && frame->total_size.to_constant() % IMM_REACH >= min_first_step)
-    return frame->total_size.to_constant() % IMM_REACH;
+      && frame_total_size % IMM_REACH < IMM_REACH / 2
+      && frame_total_size % IMM_REACH >= min_first_step)
+    return frame_total_size % IMM_REACH;
 
   if (TARGET_RVC)
     {
@@ -4625,12 +4635,12 @@ void
 riscv_expand_prologue (void)
 {
   struct riscv_frame_info *frame = &cfun->machine->frame;
-  HOST_WIDE_INT size = frame->total_size.to_constant ();
+  poly_int64 size = frame->total_size;
   unsigned mask = frame->mask;
   rtx insn;
 
   if (flag_stack_usage_info)
-    current_function_static_stack_size = size;
+    current_function_static_stack_size = constant_lower_bound (size);
 
   if (cfun->machine->naked_p)
     return;
@@ -4640,7 +4650,6 @@ riscv_expand_prologue (void)
     {
       rtx dwarf = NULL_RTX;
       dwarf = riscv_adjust_libcall_cfi_prologue ();
-
       size -= frame->save_libcall_adjustment;
       insn = emit_insn (riscv_gen_gpr_save_insn (frame));
       frame->mask = 0; /* Temporarily fib that we need not save GPRs.  */
@@ -4652,11 +4661,14 @@ riscv_expand_prologue (void)
   /* Save the registers.  */
   if ((frame->mask | frame->fmask) != 0)
     {
-      HOST_WIDE_INT step1 = MIN (size, riscv_first_stack_step (frame));
+      HOST_WIDE_INT step1 = riscv_first_stack_step (frame);
+      if (size.is_constant ())
+        step1 = MIN (size.to_constant(), step1);
 
+      gcc_assert (SMALL_OPERAND (-step1));
       insn = gen_add3_insn (stack_pointer_rtx,
-			    stack_pointer_rtx,
-			    GEN_INT (-step1));
+		            stack_pointer_rtx,
+		            GEN_INT (-step1));
       RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
       size -= step1;
       riscv_for_each_saved_reg (size, riscv_save_reg, false, false);
@@ -4667,34 +4679,56 @@ riscv_expand_prologue (void)
   /* Set up the frame pointer, if we're using one.  */
   if (frame_pointer_needed)
     {
+      poly_int64 offset = frame->hard_frame_pointer_offset - size;
       insn = gen_add3_insn (hard_frame_pointer_rtx, stack_pointer_rtx,
-			    GEN_INT ((frame->hard_frame_pointer_offset - size).to_constant ()));
+			    GEN_INT (offset.to_constant ()));
       RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
 
       riscv_emit_stack_tie ();
     }
 
   /* Allocate the rest of the frame.  */
-  if (size > 0)
+  if (known_gt (size, 0))
     {
-      if (SMALL_OPERAND (-size))
-	{
-	  insn = gen_add3_insn (stack_pointer_rtx, stack_pointer_rtx,
-				GEN_INT (-size));
-	  RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
-	}
-      else
-	{
-	  riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), GEN_INT (-size));
-	  emit_insn (gen_add3_insn (stack_pointer_rtx,
-				    stack_pointer_rtx,
-				    RISCV_PROLOGUE_TEMP (Pmode)));
-
-	  /* Describe the effect of the previous instructions.  */
-	  insn = plus_constant (Pmode, stack_pointer_rtx, -size);
-	  insn = gen_rtx_SET (stack_pointer_rtx, insn);
-	  riscv_set_frame_expr (insn);
-	}
+      /* Two-step adjustment: first for the scalar frame, then for the vector frame.  */
+      poly_int64 poly_offset (0, 0);
+      if (!size.is_constant ())
+        {
+          HOST_WIDE_INT factor = size.coeffs[1];
+          poly_offset.coeffs[0] = factor;
+          poly_offset.coeffs[1] = factor;
+          size -= poly_offset;
+        }
+
+      /* First step for the scalar frame.  */
+      HOST_WIDE_INT size_value = size.to_constant ();
+      if (size_value > 0)
+        {
+          if (SMALL_OPERAND (-size_value))
+            {
+              insn = gen_add3_insn (stack_pointer_rtx, stack_pointer_rtx,
+                                    GEN_INT (-size_value));
+              RTX_FRAME_RELATED_P (emit_insn (insn)) = 1;
+            }
+          else
+            {
+              riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), GEN_INT (size_value));
+              emit_insn (gen_sub3_insn (stack_pointer_rtx,
+                                        stack_pointer_rtx,
+                                        RISCV_PROLOGUE_TEMP (Pmode)));
+  
+              /* Describe the effect of the previous instructions.  */
+              insn = plus_constant (Pmode, stack_pointer_rtx, -size_value);
+              insn = gen_rtx_SET (stack_pointer_rtx, insn);
+              riscv_set_frame_expr (insn);
+            }
+        }
+
+      /* Second step for the vector frame.  */
+      if (known_gt (poly_offset, 0))
+        {
+          rvv_adjust_frame (stack_pointer_rtx, poly_offset, false);
+        }
     }
 }
 
@@ -4734,7 +4768,8 @@ riscv_expand_epilogue (int style)
      Start off by assuming that no registers need to be restored.  */
   struct riscv_frame_info *frame = &cfun->machine->frame;
   unsigned mask = frame->mask;
-  HOST_WIDE_INT step1 = frame->total_size.to_constant ();
+  poly_int64 step1 = frame->total_size;
+  poly_int64 restore_offset; /* Offset used when restoring registers.  */
   HOST_WIDE_INT step2 = 0;
   bool use_restore_libcall = ((style == NORMAL_RETURN)
 			      && riscv_use_save_libcall (frame));
@@ -4742,8 +4777,8 @@ riscv_expand_epilogue (int style)
   rtx insn;
 
   /* We need to add memory barrier to prevent read from deallocated stack.  */
-  bool need_barrier_p = known_ne (get_frame_size (),
-			 cfun->machine->frame.arg_pointer_offset);
+  bool need_barrier_p = known_ne (get_frame_size ()
+                                  + cfun->machine->frame.arg_pointer_offset, 0);
 
   if (cfun->machine->naked_p)
     {
@@ -4763,6 +4798,21 @@ riscv_expand_epilogue (int style)
   /* Reset the epilogue cfa info before starting to emit the epilogue.  */
   epilogue_cfa_sp_offset = 0;
 
+  if (use_restore_libcall)
+    {
+      step1 -= frame->save_libcall_adjustment;
+      frame->mask = 0; /* Temporarily fib that we need not save GPRs.  */
+    }
+
+  /* If we need to restore registers, deallocate as much stack as
+     possible in the second step without going out of range.  */
+  if ((frame->mask | frame->fmask) != 0)
+    {
+      step2 = riscv_first_stack_step (frame);
+      step1 -= step2;
+      restore_offset = step1;
+    }
+
   /* Move past any dynamic stack allocations.  */
   if (cfun->calls_alloca)
     {
@@ -4770,21 +4820,18 @@ riscv_expand_epilogue (int style)
       riscv_emit_stack_tie ();
       need_barrier_p = false;
 
-      rtx adjust = GEN_INT (-frame->hard_frame_pointer_offset.to_constant ());
-      if (!SMALL_OPERAND (INTVAL (adjust)))
-	{
-	  riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), adjust);
-	  adjust = RISCV_PROLOGUE_TEMP (Pmode);
-	}
-
-      insn = emit_insn (
-	       gen_add3_insn (stack_pointer_rtx, hard_frame_pointer_rtx,
-			      adjust));
+      gcc_assert (frame_pointer_needed);
+      poly_int64 offset = frame->hard_frame_pointer_offset - step1;
+      insn = emit_insn (gen_add3_insn (stack_pointer_rtx, hard_frame_pointer_rtx,
+			    GEN_INT (-offset.to_constant ())));
+      /* By using hard_frame_pointer_rtx we can skip the step1 adjustment
+         and go directly to the step2 position.  */
+      step1 = 0;
 
       rtx dwarf = NULL_RTX;
       rtx cfa_adjust_value = gen_rtx_PLUS (
-			       Pmode, hard_frame_pointer_rtx,
-			       GEN_INT (-frame->hard_frame_pointer_offset.to_constant ()));
+                               Pmode, hard_frame_pointer_rtx,
+                               GEN_INT (-offset.to_constant ()));
       rtx cfa_adjust_rtx = gen_rtx_SET (stack_pointer_rtx, cfa_adjust_value);
       dwarf = alloc_reg_note (REG_CFA_ADJUST_CFA, cfa_adjust_rtx, dwarf);
       RTX_FRAME_RELATED_P (insn) = 1;
@@ -4792,14 +4839,6 @@ riscv_expand_epilogue (int style)
       REG_NOTES (insn) = dwarf;
     }
 
-  /* If we need to restore registers, deallocate as much stack as
-     possible in the second step without going out of range.  */
-  if ((frame->mask | frame->fmask) != 0)
-    {
-      step2 = riscv_first_stack_step (frame);
-      step1 -= step2;
-    }
-
   /* Set TARGET to BASE + STEP1.  */
   if (known_gt (step1, 0))
     {
@@ -4807,25 +4846,38 @@ riscv_expand_epilogue (int style)
       riscv_emit_stack_tie ();
       need_barrier_p = false;
 
-      /* Get an rtx for STEP1 that we can add to BASE.  */
-      rtx adjust = GEN_INT (step1);
-      if (!SMALL_OPERAND (step1))
-	{
-	  riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), adjust);
-	  adjust = RISCV_PROLOGUE_TEMP (Pmode);
-	}
+      /* First step for the vector frame.  */
+      if (!step1.is_constant ())
+        {
+          HOST_WIDE_INT factor = step1.coeffs[1];
+          poly_int64 poly_offset (factor, factor);
+          rvv_adjust_frame (stack_pointer_rtx, poly_offset, true);
+          step1 -= poly_offset;
+        }
 
-      insn = emit_insn (
-	       gen_add3_insn (stack_pointer_rtx, stack_pointer_rtx, adjust));
+      /* Second step for the scalar frame.  */
+      HOST_WIDE_INT scalar_step1 = step1.to_constant ();
+      if (scalar_step1 > 0)
+        {
+          rtx adjust = GEN_INT (scalar_step1);
+          if (!SMALL_OPERAND (scalar_step1))
+            {
+              riscv_emit_move (RISCV_PROLOGUE_TEMP (Pmode), adjust);
+              adjust = RISCV_PROLOGUE_TEMP (Pmode);
+            }
 
-      rtx dwarf = NULL_RTX;
-      rtx cfa_adjust_rtx = gen_rtx_PLUS (Pmode, stack_pointer_rtx,
-					 GEN_INT (step2));
+          insn = emit_insn (
+                  gen_add3_insn (stack_pointer_rtx, stack_pointer_rtx, adjust));
 
-      dwarf = alloc_reg_note (REG_CFA_DEF_CFA, cfa_adjust_rtx, dwarf);
-      RTX_FRAME_RELATED_P (insn) = 1;
+          rtx dwarf = NULL_RTX;
+          rtx cfa_adjust_rtx = gen_rtx_PLUS (Pmode, stack_pointer_rtx,
+                                             GEN_INT (step2));
 
-      REG_NOTES (insn) = dwarf;
+          dwarf = alloc_reg_note (REG_CFA_DEF_CFA, cfa_adjust_rtx, dwarf);
+          RTX_FRAME_RELATED_P (insn) = 1;
+
+          REG_NOTES (insn) = dwarf;
+        }
     }
   else if (frame_pointer_needed)
     {
@@ -4834,18 +4886,11 @@ riscv_expand_epilogue (int style)
       epilogue_cfa_sp_offset = step2;
     }
 
-  if (use_restore_libcall)
-    frame->mask = 0; /* Temporarily fib that we need not save GPRs.  */
-
   /* Restore the registers.  */
-  riscv_for_each_saved_reg (frame->total_size - step2, riscv_restore_reg,
-			    true, style == EXCEPTION_RETURN);
-
-  if (use_restore_libcall)
+  if ((frame->mask | frame->fmask) != 0)
     {
-      frame->mask = mask; /* Undo the above fib.  */
-      gcc_assert (step2 >= frame->save_libcall_adjustment);
-      step2 -= frame->save_libcall_adjustment;
+      riscv_for_each_saved_reg (restore_offset, riscv_restore_reg,
+			    true, style == EXCEPTION_RETURN);
     }
 
   if (need_barrier_p)
@@ -4868,6 +4913,7 @@ riscv_expand_epilogue (int style)
 
   if (use_restore_libcall)
     {
+      frame->mask = mask; /* Undo the above fib.  */
       rtx dwarf = riscv_adjust_libcall_cfi_epilogue ();
       insn = emit_insn (gen_gpr_restore (GEN_INT (riscv_save_libcall_count (mask))));
       RTX_FRAME_RELATED_P (insn) = 1;
@@ -6118,6 +6164,67 @@ riscv_regmode_natural_size (machine_mode mode)
   return UNITS_PER_WORD;
 }
 
+/* Implement the TARGET_DWARF_POLY_INDETERMINATE_VALUE hook.  */
+
+static unsigned int
+riscv_dwarf_poly_indeterminate_value (unsigned int i, unsigned int *factor,
+                                      int *offset)
+{
+  /* Polynomial invariant 1 == (VLENB / 8) - 1.  */
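+  /* E.g. a poly_int64 (16, 16), i.e. 16 + 16 * x, therefore denotes
+     16 + 16 * (VLENB / 8 - 1) == 2 * VLENB bytes at run time
+     (illustrative).  */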
+  gcc_assert (i == 1);
+  *factor = 8;
+  *offset = 1;
+  return RISCV_DWARF_VLENB;
+}
+
+/* Implement TARGET_ESTIMATED_POLY_VALUE.
+   Look into the tuning structure for an estimate.
+   KIND specifies the type of requested estimate: min, max or likely.
+   For cores with a known RVV width all three estimates are the same.
+   For generic RVV tuning we want to distinguish the maximum estimate
+   from the minimum and likely ones.
+   The likely estimate is the same as the minimum in that case, giving
+   conservative behavior: auto-vectorization with RVV only when it is
+   a win even for 128-bit RVV.
+   When RVV width information is available, VAL.coeffs[1] is multiplied
+   by the number of 128-bit chunks beyond the initial 128 bits.  */
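+
+/* For example (illustrative): with no tuning information a poly_int64
+   (4, 4) is estimated as 4 for POLY_VALUE_MIN and POLY_VALUE_LIKELY
+   and as 4 + 4 * 15 == 64 for POLY_VALUE_MAX; with a known 256-bit
+   width, over_128 is 128 and every estimate becomes
+   coeffs[0] + coeffs[1].  */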
+
+static HOST_WIDE_INT
+riscv_estimated_poly_value (poly_int64 val,
+                            poly_value_estimate_kind kind = POLY_VALUE_LIKELY)
+{
+  unsigned int width_source =
+      BITS_PER_RISCV_VECTOR.is_constant ()
+          ? (unsigned int)BITS_PER_RISCV_VECTOR.to_constant ()
+          : (unsigned int)RVV_SCALABLE;
+
+  /* If there is no core-specific information then the minimum and likely
+     values are based on 128-bit vectors and the maximum is based on
+     the architectural maximum of 2048 bits.  */
+  if (width_source == RVV_SCALABLE)
+    switch (kind)
+      {
+      case POLY_VALUE_MIN:
+      case POLY_VALUE_LIKELY:
+        return val.coeffs[0];
+
+      case POLY_VALUE_MAX:
+        return val.coeffs[0] + val.coeffs[1] * 15;
+      }
+
+  /* Allow BITS_PER_RISCV_VECTOR to be a bitmask of different VL, treating the
+     lowest as likely.  This could be made more general if future -mtune
+     options need it to be.  */
+  if (kind == POLY_VALUE_MAX)
+    width_source = 1 << floor_log2 (width_source);
+  else
+    width_source = least_bit_hwi (width_source);
+
+  /* If the core provides width information, use that.  */
+  HOST_WIDE_INT over_128 = width_source - 128;
+  return val.coeffs[0] + val.coeffs[1] * over_128 / 128;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -6336,6 +6443,12 @@ riscv_regmode_natural_size (machine_mode mode)
 #undef TARGET_MANGLE_TYPE
 #define TARGET_MANGLE_TYPE riscv_mangle_type
 
+#undef TARGET_DWARF_POLY_INDETERMINATE_VALUE
+#define TARGET_DWARF_POLY_INDETERMINATE_VALUE riscv_dwarf_poly_indeterminate_value
+
+#undef TARGET_ESTIMATED_POLY_VALUE
+#define TARGET_ESTIMATED_POLY_VALUE riscv_estimated_poly_value
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 5de745bc824..03eb92900be 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -396,6 +396,8 @@ ASM_MISA_SPEC
 
 #define RISCV_PROLOGUE_TEMP_REGNUM (GP_TEMP_FIRST)
 #define RISCV_PROLOGUE_TEMP(MODE) gen_rtx_REG (MODE, RISCV_PROLOGUE_TEMP_REGNUM)
+#define RISCV_PROLOGUE_TEMP2_REGNUM (GP_TEMP_FIRST + 1)
+#define RISCV_PROLOGUE_TEMP2(MODE) gen_rtx_REG (MODE, RISCV_PROLOGUE_TEMP2_REGNUM)
 
 #define RISCV_CALL_ADDRESS_TEMP_REGNUM (GP_TEMP_FIRST + 1)
 #define RISCV_CALL_ADDRESS_TEMP(MODE) \
@@ -1085,4 +1087,6 @@ extern void riscv_remove_unneeded_save_restore_calls (void);
 
 #define REGMODE_NATURAL_SIZE(MODE) riscv_regmode_natural_size (MODE)
 
+#define RISCV_DWARF_VLENB	(4096 + 0xc22)
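+/* vlenb is CSR 0xc22; the RISC-V psABI maps CSR N to DWARF register
+   number 4096 + N.  */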
+
 #endif /* ! GCC_RISCV_H */
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/rvv-stack.exp b/gcc/testsuite/gcc.target/riscv/rvv/stack/rvv-stack.exp
new file mode 100644
index 00000000000..9a558f32ed0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/rvv-stack.exp
@@ -0,0 +1,47 @@
+# Copyright (C) 2022 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# GCC testsuite that uses the `dg.exp' driver.
+
+# Exit immediately if this isn't a RISC-V target.
+if ![istarget riscv*-*-*] then {
+  return
+}
+
+# Load support procs.
+load_lib gcc-dg.exp
+
+set gcc_march_list [list "-march=rv64gcv" "-march=rv64gv"]
+if [istarget riscv32-*-*] then {
+  set gcc_march_list [list "-march=rv32gcv" "-march=rv32gv"]
+}
+
+# If a testcase doesn't have special options, use these.
+global DEFAULT_CFLAGS
+if ![info exists DEFAULT_CFLAGS] then {
+    set DEFAULT_CFLAGS " -ansi -pedantic-errors"
+}
+
+# Initialize `dg'.
+dg-init
+
+# Main loop.
+foreach march $gcc_march_list {
+        gcc-dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
+                $march $DEFAULT_CFLAGS
+}
+# All done.
+dg-finish
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c
new file mode 100644
index 00000000000..58c8e6de603
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-scalar.c
@@ -0,0 +1,53 @@
+
+/* { dg-do compile } */
+/* { dg-options "-fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+void f2 (char*);
+void f3 (char*, ...);
+
+/*
+** stack_check_alloca_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-48
+**	sw	ra,12\(sp\)
+**	sw	s0,8\(sp\)
+**	addi	s0,sp,16
+**	...
+**	addi	a0,a0,23
+**	andi	a0,a0,-16
+**	sub	sp,sp,a0
+**	...
+**	addi	sp,s0,-16
+**	lw	ra,12\(sp\)
+**	lw	s0,8\(sp\)
+**	addi	sp,sp,48
+**	jr	ra
+*/
+void stack_check_alloca_1 (int y, ...)
+{
+  char* pStr = (char*)__builtin_alloca(y);
+  f2(pStr);
+}
+
+/*
+** stack_check_alloca_2: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-48
+**	sw	ra,44\(sp\)
+**	sw	s0,40\(sp\)
+**	addi	s0,sp,48
+**	addi	a0,a0,23
+**	andi	a0,a0,-16
+**	sub	sp,sp,a0
+**	...
+**	addi	sp,s0,-48
+**	lw	ra,44\(sp\)
+**	lw	s0,40\(sp\)
+**	addi	sp,sp,48
+**	jr	ra
+*/
+void stack_check_alloca_2 (int y)
+{
+  char* pStr = (char*)__builtin_alloca(y);
+  f3(pStr, pStr, pStr, pStr, pStr, pStr, pStr, pStr, 2, pStr, pStr, pStr, 1);
+}
+
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c
new file mode 100644
index 00000000000..12c05b337cb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-alloca-vector.c
@@ -0,0 +1,45 @@
+
+/* { dg-do compile } */
+/* { dg-options "-fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include <riscv_vector.h>
+
+void f (char*);
+
+/*
+** stack_check_alloca_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-48
+**	sw	ra,12\(sp\)
+**	sw	s0,8\(sp\)
+**	addi	s0,sp,16
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	addi	a1,a1,23
+**	andi	a1,a1,-16
+**	sub	sp,sp,a1
+**	...
+**	addi	sp,s0,-16
+**	lw	ra,12\(sp\)
+**	lw	s0,8\(sp\)
+**	addi	sp,sp,48
+**	jr	ra
+*/
+void stack_check_alloca_1 (vuint8m1_t data, uint8_t *base, int y, ...)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m1_t *)base = data;
+  char* pStr = (char*)__builtin_alloca(y);
+  f(pStr);
+}
+
+
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c
new file mode 100644
index 00000000000..72179791677
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-scalar.c
@@ -0,0 +1,48 @@
+
+/* { dg-do compile } */
+/* { dg-options "-msave-restore -fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+
+void fn2 (float a1, float a2, float a3, float a4,
+          float a5, float a6, float a7, float a8);
+void fn3 (char*);
+
+
+/*
+** stack_save_restore_1: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	call	t0,__riscv_save_0
+**	addi	sp,sp,-32
+**	fs(w|d)	fs0,24\(sp\)
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	sub	sp,sp,t0
+**	...
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	add	sp,sp,t0
+**	fl(w|d)	fs0,24\(sp\)
+**	addi	sp,sp,32
+**	tail	__riscv_restore_0
+*/
+int stack_save_restore_1 (float a1, float a2, float a3, float a4,
+                      float a5, float a6, float a7, float a8)
+{
+  char d[8000];
+  float f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13;
+  asm volatile ("nop"
+                : "=f" (f1), "=f" (f2), "=f" (f3), "=f" (f4), "=f" (f5), "=f" (f6),
+                  "=f" (f7), "=f" (f8), "=f" (f9),  "=f" (f10), "=f" (f11),
+                  "=f" (f12), "=f" (f13)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "f" (f1), "f" (f2), "f" (f3), "f" (f4), "f" (f5), "f" (f6),
+                  "f" (f7), "f" (f8), "f" (f9), "f" (f10), "f" (f11),
+                  "f" (f12), "f" (f13)
+                :);
+  fn2 (a1, a2, a3, a4, a5, a6, a7, a8);
+  fn3(d);
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c
new file mode 100644
index 00000000000..694ce4669c0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-save-restore-vector.c
@@ -0,0 +1,62 @@
+
+/* { dg-do compile } */
+/* { dg-options "-msave-restore -fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+#include <riscv_vector.h>
+
+void fn2 (float a1, float a2, float a3, float a4,
+          float a5, float a6, float a7, float a8);
+void fn3 (char*);
+
+/*
+** stack_save_restore_2: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	call	t0,__riscv_save_0
+**	addi	sp,sp,-32
+**	fs(w|d)	fs0,24\(sp\)
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	sub	sp,sp,t0
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	add	sp,sp,t0
+**	fl(w|d)	fs0,24\(sp\)
+**	addi	sp,sp,32
+**	tail	__riscv_restore_0
+*/
+int stack_save_restore_2 (float a1, float a2, float a3, float a4,
+                      float a5, float a6, float a7, float a8,
+                      vuint8m1_t data, uint8_t *base)
+{
+  char d[8000];
+  float f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13;
+  asm volatile ("nop"
+                : "=f" (f1), "=f" (f2), "=f" (f3), "=f" (f4), "=f" (f5), "=f" (f6),
+                  "=f" (f7), "=f" (f8), "=f" (f9),  "=f" (f10), "=f" (f11),
+                  "=f" (f12), "=f" (f13)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "f" (f1), "f" (f2), "f" (f3), "f" (f4), "f" (f5), "f" (f6),
+                  "f" (f7), "f" (f8), "f" (f9), "f" (f10), "f" (f11),
+                  "f" (f12), "f" (f13)
+                :);
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m1_t *)base = data;
+  fn2 (a1, a2, a3, a4, a5, a6, a7, a8);
+  fn3(d);
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-scalar.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-scalar.c
new file mode 100644
index 00000000000..4400470b650
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-scalar.c
@@ -0,0 +1,205 @@
+/* { dg-do compile } */
+/* { dg-options "-fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include <stdint.h>
+
+void f (uint8_t *);
+
+/* GPR: 16, local: 16, total: 32
+** stack_offset1_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	addi	a0,sp,4
+**	call	f
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+void stack_offset1_1 ()
+{
+  uint8_t local[10];
+  f(local);
+}
+
+/* GPR: 16, local: 2016, total: 2032
+** stack_offset1_2: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	mv	a0,sp
+**	call	f
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset1_2 ()
+{
+  uint8_t local[2016];
+  f(local);
+}
+
+/* GPR: 16, local: 6000, total: 6016
+** stack_offset2_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-1920
+**	sw	ra,1916\(sp\)
+**	li	t0,4096
+**	sub	sp,sp,t0
+**	li	a0,-4096
+**	addi	a0,a0,-1904
+**	li	a5,4096
+**	addi	a5,a5,1904
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	li	t0,4096
+**	add	sp,sp,t0
+**	lw	ra,1916\(sp\)
+**	addi	sp,sp,1920
+**	jr	ra
+*/
+void stack_offset2_1 ()
+{
+  uint8_t local[6000];
+  f(local);
+}
+
+/* GPR: 16, local: 2032, total: 2048
+** stack_offset3_1: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	addi	sp,sp,-2016
+**	addi	a0,sp,12
+**	call	f
+**	addi	sp,sp,2016
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+/*
+** stack_offset3_1: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-16
+**	addi	a0,sp,12
+**	call	f
+**	addi	sp,sp,16
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset3_1 ()
+{
+  uint8_t local[2017];
+  f(local);
+}
+
+/* GPR: 16, local: 2112, total: 2128
+** stack_offset3_2: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-96
+**	sw	ra,92\(sp\)
+**	addi	sp,sp,-2032
+**	li	a0,-4096
+**	addi	a0,a0,1996
+**	li	a5,4096
+**	addi	a5,a5,-1984
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	addi	sp,sp,2032
+**	lw	ra,92\(sp\)
+**	addi	sp,sp,96
+**	jr	ra
+*/
+/*
+** stack_offset3_2: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-96
+**	li	a0,-4096
+**	addi	a0,a0,1996
+**	li	a5,4096
+**	addi	a5,a5,-1984
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	addi	sp,sp,96
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset3_2 ()
+{
+  uint8_t local[2100];
+  f(local);
+}
+
+/* GPR: 16, local: 8000, total: 8016
+** stack_offset4_1: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	sub	sp,sp,t0
+**	li	a0,-8192
+**	addi	a0,a0,192
+**	li	a5,8192
+**	addi	a5,a5,-192
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	add	sp,sp,t0
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+/*
+** stack_offset4_1: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	li	t0,4096
+**	addi	t0,t0,1888
+**	sub	sp,sp,t0
+**	li	a0,-8192
+**	addi	a0,a0,192
+**	li	a5,8192
+**	addi	a5,a5,-192
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	li	t0,4096
+**	addi	t0,t0,1888
+**	add	sp,sp,t0
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset4_1 ()
+{
+  uint8_t local[8000];
+  f(local);
+}
+
+/* GPR: 16, local: 3056, total: 3072
+** stack_offset5_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-1040
+**	li	a0,-4096
+**	addi	a0,a0,1048
+**	li	a5,4096
+**	addi	a5,a5,-1040
+**	add	a5,a5,a0
+**	add	a0,a5,sp
+**	call	f
+**	addi	sp,sp,1040
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset5_1 ()
+{
+  uint8_t local[3048];
+  f(local);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c
new file mode 100644
index 00000000000..ffc90a02f65
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vararg-scalar.c
@@ -0,0 +1,33 @@
+/* { dg-do run } */
+/* { dg-options "-std=c11" } */
+
+#include <stdlib.h>
+#include <stdarg.h>
+
+int va_sum(int a, int count, ...)
+{
+  va_list ap;
+  va_start(ap, count);
+  for (int i = count; i > 0; i--)
+  {
+    int arg = va_arg(ap, int);
+    a = a + arg;
+  }
+  va_end(ap);
+  return a;
+}
+
+int main()
+{
+  int sum = 0;
+  int a = 1;
+  int b = 2;
+  int c = 3;
+  int d = 4;
+  sum = va_sum(sum, 4, a, b, c, d);
+
+  if (sum != 10)
+    {
+      abort();
+    }
+}
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_1.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_1.c
new file mode 100644
index 00000000000..afca87532ae
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_1.c
@@ -0,0 +1,277 @@
+/* { dg-do compile } */
+/* { dg-options "-fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include <stdint.h>
+#include <riscv_vector.h>
+
+void f (uint8_t *);
+void f2 (vuint8m1_t);
+
+/* GPR: 16, local: 16+vlenb, total: 32+vlenb
+** stack_offset1_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+void stack_offset1_1 (vuint8m1_t data)
+{
+  uint8_t local[10];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 2016+vlenb, total: 2032+vlenb
+** stack_offset1_2: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset1_2 (vuint8m1_t data)
+{
+  uint8_t local[2016];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 6000+vlenb, total: 6016+vlenb
+** stack_offset2_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-1920
+**	sw	ra,1916\(sp\)
+**	li	t0,4096
+**	sub	sp,sp,t0
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	li	t0,4096
+**	add	sp,sp,t0
+**	lw	ra,1916\(sp\)
+**	addi	sp,sp,1920
+**	jr	ra
+*/
+void stack_offset2_1 (vuint8m1_t data)
+{
+  uint8_t local[6000];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 2032+vlenb, total: 2048+vlenb
+** stack_offset3_1: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	addi	sp,sp,-2016
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	addi	sp,sp,2016
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+/*
+** stack_offset3_1: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-16
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	addi	sp,sp,16
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset3_1 (vuint8m1_t data)
+{
+  uint8_t local[2017];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 2112+vlenb, total: 2128+vlenb
+** stack_offset3_2: { target { { any-opts "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-96
+**	sw	ra,92\(sp\)
+**	addi	sp,sp,-2032
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	addi	sp,sp,2032
+**	lw	ra,92\(sp\)
+**	addi	sp,sp,96
+**	jr	ra
+*/
+/*
+** stack_offset3_2: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-96
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	addi	sp,sp,96
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset3_2 (vuint8m1_t data)
+{
+  uint8_t local[2100];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 8000+vlenb, total: 8016+vlenb
+** stack_offset4_1: { target { { any-opts  "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-32
+**	sw	ra,28\(sp\)
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	sub	sp,sp,t0
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	li	t0,8192
+**	addi	t0,t0,-208
+**	add	sp,sp,t0
+**	lw	ra,28\(sp\)
+**	addi	sp,sp,32
+**	jr	ra
+*/
+/*
+** stack_offset4_1: { target { { any-opts "-march=rv32gv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	li	t0,4096
+**	addi	t0,t0,1888
+**	sub	sp,sp,t0
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	li	t0,4096
+**	addi	t0,t0,1888
+**	add	sp,sp,t0
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset4_1 (vuint8m1_t data)
+{
+  uint8_t local[8000];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
+
+/* GPR: 16, local: 3056+vlenb, total: 3072+vlenb
+** stack_offset5_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	addi	sp,sp,-2032
+**	sw	ra,2028\(sp\)
+**	addi	sp,sp,-1040
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**	...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	addi	sp,sp,1040
+**	lw	ra,2028\(sp\)
+**	addi	sp,sp,2032
+**	jr	ra
+*/
+void stack_offset5_1 (vuint8m1_t data)
+{
+  uint8_t local[3048];
+  f(local);
+  vint16m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  f2(data);
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_2.c b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_2.c
new file mode 100644
index 00000000000..7938fa6261c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/stack/stack-check-vector_2.c
@@ -0,0 +1,141 @@
+/* { dg-do compile } */
+/* { dg-options "-fno-schedule-insns -fno-schedule-insns2" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#include <stdint.h>
+#include <riscv_vector.h>
+
+void f (uint8_t *);
+void f2 (vuint8m1_t);
+
+/* 1*vlenb
+** stack_offset1_1: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	csrr	t0,vlenb
+**	sub	sp,sp,t0
+**      ...
+**	csrr	t0,vlenb
+**	add	sp,sp,t0
+**	jr	ra
+*/
+void stack_offset1_1 (vuint8m1_t data, uint8_t *base)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m1_t *)base = data;
+}
+
+/* 8*vlenb
+** stack_offset1_2: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	csrr	t0,vlenb
+**	slli	t1,t0,3
+**	sub	sp,sp,t1
+**      ...
+**	csrr	t0,vlenb
+**	slli	t1,t0,3
+**	add	sp,sp,t1
+**	jr	ra
+*/
+void stack_offset1_2 (vuint8m8_t data, uint8_t *base)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m8_t *)base = data;
+}
+
+/* 3*vlenb
+** stack_offset1_3: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	csrr	t0,vlenb
+**	slli	t1,t0,2
+**	sub	t1,t1,t0
+**	sub	sp,sp,t1
+**      ...
+**	csrr	t0,vlenb
+**	slli	t1,t0,2
+**	sub	t1,t1,t0
+**	add	sp,sp,t1
+**	jr	ra
+*/
+void stack_offset1_3 (vuint8m1_t data, vuint8m2_t data2, uint8_t *base)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m2_t *)base = data2; 
+  *(vuint8m1_t *)base = data;
+}
+
+/* 9*vlenb
+** stack_offset1_4: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	csrr	t0,vlenb
+**	slli	t1,t0,3
+**	add	t1,t1,t0
+**	sub	sp,sp,t1
+**      ...
+**	csrr	t0,vlenb
+**	slli	t1,t0,3
+**	add	t1,t1,t0
+**	add	sp,sp,t1
+**	jr	ra
+*/
+void stack_offset1_4 (vuint8m1_t data, vuint8m8_t data2, uint8_t *base)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m8_t *)base = data2; 
+  *(vuint8m1_t *)base = data;
+}
+
+/* 10*vlenb
+** stack_offset1_5: { target { { any-opts "-march=rv32gv" "-march=rv32gcv" } && { { any-opts "-O1" "-O2" "-O3" "-Os" } && { no-opts "-flto" } } } }
+**	csrr	t0,vlenb
+**	li	t1,10
+**	mul	t1,t1,t0
+**	sub	sp,sp,t1
+**      ...
+**	csrr	t0,vlenb
+**	li	t1,10
+**	mul	t1,t1,t0
+**	add	sp,sp,t1
+**	jr	ra
+*/
+void stack_offset1_5 (vuint8m2_t data, vuint8m8_t data2, uint8_t *base)
+{
+  vuint8m8_t v0, v8, v16, v24;
+  asm volatile ("nop"
+                : "=vr" (v0), "=vr" (v8), "=vr" (v16), "=vr" (v24)
+                :
+                :);
+  asm volatile ("nop"
+                :
+                : "vr" (v0), "vr" (v8), "vr" (v16), "vr" (v24)
+                :);
+  *(vuint8m8_t *)base = data2; 
+  *(vuint8m2_t *)base = data;
+}
-- 
2.36.1




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 15/21] Add integer intrinsics
  2022-05-31  8:49 [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** juzhe.zhong
                   ` (11 preceding siblings ...)
  2022-05-31  8:50 ` [PATCH 13/21] Adjust scalable frame and full testcases juzhe.zhong
@ 2022-05-31  8:50 ` juzhe.zhong
  2022-05-31  8:50 ` [PATCH 18/21] Add rest intrinsic support juzhe.zhong
  2022-05-31 16:51 ` [PATCH 00/21] *** Add RVV (RISC-V 'V' Extension) support *** Palmer Dabbelt
  14 siblings, 0 replies; 16+ messages in thread
From: juzhe.zhong @ 2022-05-31  8:50 UTC (permalink / raw)
  To: gcc-patches; +Cc: kito.cheng, palmer, juzhe.zhong

From: zhongjuzhe <juzhe.zhong@rivai.ai>

gcc/ChangeLog:

        * config/riscv/riscv-protos.h (emit_op6, emit_op7): Declare.
        * config/riscv/riscv-vector-builtins-functions.cc (basic_alu::assemble_name): New function.
        (basic_alu::get_return_type): New function.
        (unop::get_argument_types): New function.
        (unop::can_be_overloaded_p): New function.
        (binop::get_argument_types): New function.
        (binop::can_be_overloaded_p): New function.
        (wbinop::assemble_name): New function.
        (ternop::get_argument_types): New function.
        (ternop::can_be_overloaded_p): New function.
        (vadd::expand): New function.
        (vsub::expand): New function.
        (vrsub::expand): New function.
        (vneg::expand): New function.
        (vwadd::expand): New function.
        (vwsub::expand): New function.
        (vwaddu::expand): New function.
        (vwsubu::expand): New function.
        (vwcvt::expand): New function.
        (vwcvtu::expand): New function.
        (vsext::assemble_name): New function.
        (vsext::expand): New function.
        (vzext::assemble_name): New function.
        (vzext::expand): New function.
        (vadc::expand): New function.
        (vsbc::expand): New function.
        (vmadc::assemble_name): New function.
        (vmadc::expand): New function.
        (vmsbc::assemble_name): New function.
        (vmsbc::expand): New function.
        (vand::expand): New function.
        (vor::expand): New function.
        (vxor::expand): New function.
        (vnot::expand): New function.
        (vshift::get_argument_types): New function.
        (vsll::expand): New function.
        (vsrl::expand): New function.
        (vsra::expand): New function.
        (vnsrl::expand): New function.
        (vnsra::expand): New function.
        (vncvt::expand): New function.
        (vcmp::assemble_name): New function.
        (vmseq::expand): New function.
        (vmsne::expand): New function.
        (vmslt::expand): New function.
        (vmsltu::expand): New function.
        (vmsgt::expand): New function.
        (vmsgtu::expand): New function.
        (vmsle::expand): New function.
        (vmsleu::expand): New function.
        (vmsge::expand): New function.
        (vmsgeu::expand): New function.
        (vmin::expand): New function.
        (vminu::expand): New function.
        (vmax::expand): New function.
        (vmaxu::expand): New function.
        (vmul::expand): New function.
        (vmulh::expand): New function.
        (vmulhu::expand): New function.
        (vmulhsu::expand): New function.
        (vdiv::expand): New function.
        (vdivu::expand): New function.
        (vrem::expand): New function.
        (vremu::expand): New function.
        (vwmul::expand): New function.
        (vwmulu::expand): New function.
        (vwmulsu::expand): New function.
        (vmacc::expand): New function.
        (vnmsac::expand): New function.
        (vmadd::expand): New function.
        (vnmsub::expand): New function.
        (vwmacc::expand): New function.
        (vwmaccu::expand): New function.
        (vwmaccsu::expand): New function.
        (vwmaccus::expand): New function.
        (vmerge::get_position_of_dest_arg): New function.
        (vmerge::expand): New function.
        (vmv::get_argument_types): New function.
        (vmv::can_be_overloaded_p): New function.
        (vmv::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def (vadd): New macro define.
        (vsub): New macro define.
        (vrsub): New macro define.
        (vneg): New macro define.
        (vwadd): New macro define.
        (vwsub): New macro define.
        (vwaddu): New macro define.
        (vwsubu): New macro define.
        (vwcvt): New macro define.
        (vwcvtu): New macro define.
        (vsext): New macro define.
        (vzext): New macro define.
        (vadc): New macro define.
        (vsbc): New macro define.
        (vmadc): New macro define.
        (vmsbc): New macro define.
        (vand): New macro define.
        (vor): New macro define.
        (vxor): New macro define.
        (vnot): New macro define.
        (vsll): New macro define.
        (vsrl): New macro define.
        (vsra): New macro define.
        (vnsrl): New macro define.
        (vnsra): New macro define.
        (vncvt): New macro define.
        (vmseq): New macro define.
        (vmsne): New macro define.
        (vmslt): New macro define.
        (vmsltu): New macro define.
        (vmsgt): New macro define.
        (vmsgtu): New macro define.
        (vmsle): New macro define.
        (vmsleu): New macro define.
        (vmsge): New macro define.
        (vmsgeu): New macro define.
        (vmin): New macro define.
        (vminu): New macro define.
        (vmax): New macro define.
        (vmaxu): New macro define.
        (vmul): New macro define.
        (vmulh): New macro define.
        (vmulhu): New macro define.
        (vmulhsu): New macro define.
        (vdiv): New macro define.
        (vdivu): New macro define.
        (vrem): New macro define.
        (vremu): New macro define.
        (vwmul): New macro define.
        (vwmulu): New macro define.
        (vwmulsu): New macro define.
        (vmacc): New macro define.
        (vnmsac): New macro define.
        (vmadd): New macro define.
        (vnmsub): New macro define.
        (vwmacc): New macro define.
        (vwmaccu): New macro define.
        (vwmaccsu): New macro define.
        (vwmaccus): New macro define.
        (vmerge): New macro define.
        (vmv): New macro define.
        * config/riscv/riscv-vector-builtins-functions.h (class basic_alu): New class.
        (class unop): New class.
        (class binop): New class.
        (class wbinop): New class.
        (class ternop): New class.
        (class vadd): New class.
        (class vsub): New class.
        (class vrsub): New class.
        (class vneg): New class.
        (class vwadd): New class.
        (class vwsub): New class.
        (class vwaddu): New class.
        (class vwsubu): New class.
        (class vwcvt): New class.
        (class vwcvtu): New class.
        (class vsext): New class.
        (class vzext): New class.
        (class vadc): New class.
        (class vsbc): New class.
        (class vmadc): New class.
        (class vmsbc): New class.
        (class vand): New class.
        (class vor): New class.
        (class vxor): New class.
        (class vnot): New class.
        (class vshift): New class.
        (class vsll): New class.
        (class vsrl): New class.
        (class vsra): New class.
        (class vnsrl): New class.
        (class vnsra): New class.
        (class vncvt): New class.
        (class vcmp): New class.
        (class vmseq): New class.
        (class vmsne): New class.
        (class vmslt): New class.
        (class vmsltu): New class.
        (class vmsgt): New class.
        (class vmsgtu): New class.
        (class vmsle): New class.
        (class vmsleu): New class.
        (class vmsge): New class.
        (class vmsgeu): New class.
        (class vmin): New class.
        (class vminu): New class.
        (class vmax): New class.
        (class vmaxu): New class.
        (class vmul): New class.
        (class vmulh): New class.
        (class vmulhu): New class.
        (class vmulhsu): New class.
        (class vdiv): New class.
        (class vdivu): New class.
        (class vrem): New class.
        (class vremu): New class.
        (class vwmul): New class.
        (class vwmulu): New class.
        (class vwmulsu): New class.
        (class vmacc): New class.
        (class vnmsac): New class.
        (class vmadd): New class.
        (class vnmsub): New class.
        (class vwmacc): New class.
        (class vwmaccu): New class.
        (class vwmaccsu): New class.
        (class vwmaccus): New class.
        (class vmerge): New class.
        (class vmv): New class.
        * config/riscv/riscv-vector.cc (modify_operands): Fix handling of UNSPEC_VSSUBU.
        (emit_op6): New function.
        (emit_op7): New function.
        * config/riscv/riscv.cc (riscv_print_operand): Add %v and %V assembly support.
        (riscv_modes_tieable_p): Adjust for RVV modes.
        * config/riscv/riscv.md: Add more code support.
        * config/riscv/vector-iterators.md: Fix iterators.
        * config/riscv/vector.md (@v<vxoptab><mode>_vx): New pattern.
        (@v<vxoptab><mode>_vxm): New pattern.
        (@vms<optab><mode>_vx): New pattern.
        (@vadd<mode>_vv): New pattern.
        (@vsub<mode>_vv): New pattern.
        (@vadd<mode>_vx_internal): New pattern.
        (@vadd<mode>_vx_32bit): New pattern.
        (@vsub<mode>_vx_internal): New pattern.
        (@vsub<mode>_vx_32bit): New pattern.
        (@vrsub<mode>_vx_internal): New pattern.
        (@vrsub<mode>_vx_32bit): New pattern.
        (@vneg<mode>_v): New pattern.
        (@vw<plus_minus:optab><u><vw>_vv): New pattern.
        (@vw<plus_minus:optab><u><vw>_vx): New pattern.
        (@vw<plus_minus:optab><u><vw>_wv): New pattern.
        (@vw<plus_minus:optab><u><vw>_wx): New pattern.
        (@vwcvt<u><vw>_x_x_v): New pattern.
        (@v<sz>ext<vw>_vf2): New pattern.
        (@v<sz>ext<vqw>_vf4): New pattern.
        (@v<sz>ext<vow>_vf8): New pattern.
        (@vadc<mode>_vvm): New pattern.
        (@vsbc<mode>_vvm): New pattern.
        (@vadc<mode>_vxm_internal): New pattern.
        (@vadc<mode>_vxm_32bit): New pattern.
        (@vsbc<mode>_vxm_internal): New pattern.
        (@vsbc<mode>_vxm_32bit): New pattern.
        (@vmadc<mode>_vvm): New pattern.
        (@vmsbc<mode>_vvm): New pattern.
        (@vmadc<mode>_vxm_internal): New pattern.
        (@vmadc<mode>_vxm_32bit): New pattern.
        (@vmsbc<mode>_vxm_internal): New pattern.
        (@vmsbc<mode>_vxm_32bit): New pattern.
        (@vmadc<mode>_vv): New pattern.
        (@vmsbc<mode>_vv): New pattern.
        (@vmadc<mode>_vx_internal): New pattern.
        (@vmadc<mode>_vx_32bit): New pattern.
        (@vmsbc<mode>_vx_internal): New pattern.
        (@vmsbc<mode>_vx_32bit): New pattern.
        (@v<optab><mode>_vv): New pattern.
        (@v<optab><mode>_vx_internal): New pattern.
        (@v<optab><mode>_vx_32bit): New pattern.
        (@vnot<mode>_v): New pattern.
        (@v<optab><mode>_vx): New pattern.
        (@vn<optab><mode>_wv): New pattern.
        (@vn<optab><mode>_wx): New pattern.
        (@vncvt<mode>_x_x_w): New pattern.
        (@vms<optab><mode>_vv): New pattern.
        (@vms<optab><mode>_vx_internal): New pattern.
        (@vms<optab><mode>_vx_32bit): New pattern.
        (*vms<optab><mode>_vx): New pattern.
        (*vms<optab><mode>_vx_32bit): New pattern.
        (@vmul<mode>_vv): New pattern.
        (@vmul<mode>_vx_internal): New pattern.
        (@vmul<mode>_vx_32bit): New pattern.
        (@vmulh<u><mode>_vv): New pattern.
        (@vmulh<u><mode>_vx_internal): New pattern.
        (@vmulh<u><mode>_vx_32bit): New pattern.
        (@vmulhsu<mode>_vv): New pattern.
        (@vmulhsu<mode>_vx_internal): New pattern.
        (@vmulhsu<mode>_vx_32bit): New pattern.
        (@vwmul<u><vw>_vv): New pattern.
        (@vwmul<u><vw>_vx): New pattern.
        (@vwmulsu<vw>_vv): New pattern.
        (@vwmulsu<vw>_vx): New pattern.
        (@v<imac><mode>_vv): New pattern.
        (@v<imac><mode>_vx_internal): New pattern.
        (@v<imac><mode>_vx_32bit): New pattern.
        (@vwmacc<u><vw>_vv): New pattern.
        (@vwmacc<u><vw>_vx): New pattern.
        (@vwmaccsu<vw>_vv): New pattern.
        (@vwmaccsu<vw>_vx): New pattern.
        (@vwmaccus<vw>_vx): New pattern.
        (@vmerge<mode>_vvm): New pattern.
        (@vmerge<mode>_vxm_internal): New pattern.
        (@vmerge<mode>_vxm_32bit): New pattern.
        (@vmv<mode>_v_v): New pattern.
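
For reference, a minimal use of the integer intrinsics added here
(illustrative; exact prototypes follow the intrinsic API this series
implements):

#include <riscv_vector.h>

vint32m1_t
add_vv (vint32m1_t op1, vint32m1_t op2, size_t vl)
{
  return vadd_vv_i32m1 (op1, op2, vl); /* vadd.vv */
}

vint32m1_t
add_vx (vint32m1_t op1, int32_t op2, size_t vl)
{
  return vadd_vx_i32m1 (op1, op2, vl); /* vadd.vx */
}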

---
 gcc/config/riscv/riscv-protos.h               |   24 +
 .../riscv/riscv-vector-builtins-functions.cc  |  991 +++++++
 .../riscv/riscv-vector-builtins-functions.def |   98 +
 .../riscv/riscv-vector-builtins-functions.h   |  701 +++++
 gcc/config/riscv/riscv-vector.cc              |   41 +-
 gcc/config/riscv/riscv.cc                     |   53 +
 gcc/config/riscv/riscv.md                     |   62 +-
 gcc/config/riscv/vector-iterators.md          |    3 -
 gcc/config/riscv/vector.md                    | 2575 ++++++++++++++++-
 9 files changed, 4502 insertions(+), 46 deletions(-)

diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index fd8906e47de..c47ab8e079d 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -145,6 +145,30 @@ emit_op5 (
   bool (*imm_p) (rtx),
   int i, bool reverse
 );
+extern void
+emit_op6 (
+  unsigned int unspec,
+  machine_mode Vmode, machine_mode VSImode, machine_mode VMSImode,
+  machine_mode VSUBmode,
+  rtx *operands,
+  rtx (*gen_vx) (rtx, rtx, rtx, rtx, rtx, rtx),
+  rtx (*gen_vx_32bit) (rtx, rtx, rtx, rtx, rtx, rtx),
+  rtx (*gen_vv) (rtx, rtx, rtx, rtx, rtx, rtx),
+  bool (*imm_p) (rtx),
+  int i, bool reverse
+);
+extern void
+emit_op7 (
+  unsigned int unspec,
+  machine_mode Vmode, machine_mode VSImode, machine_mode VMSImode,
+  machine_mode VSUBmode,
+  rtx *operands,
+  rtx (*gen_vx) (rtx, rtx, rtx, rtx, rtx, rtx, rtx),
+  rtx (*gen_vx_32bit) (rtx, rtx, rtx, rtx, rtx, rtx, rtx),
+  rtx (*gen_vv) (rtx, rtx, rtx, rtx, rtx, rtx, rtx),
+  bool (*imm_p) (rtx),
+  int i, bool reverse
+);
 
 /* We classify builtin types into two classes:
    1. General builtin class which is using the
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index 0726465f146..6e0fd0b3570 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -2072,6 +2072,997 @@ vleff::expand (const function_instance &instance, tree exp, rtx target) const
   return expand_builtin_insn (icode, exp, target, instance);
 }
 
+/* A function implementation for basic_alu functions.  */
+char *
+basic_alu::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0);
+  if (this->can_be_overloaded_p (instance))
+    {
+      append_name (instance.get_base_name ());
+      if (instance.get_operation () == OP_v_x ||
+          instance.get_operation () == OP_v_v ||
+          instance.get_operation () == OP_v_f)
+        append_name ("_v");
+      if (instance.get_operation () == OP_x_x_v ||
+          instance.get_operation () == OP_x_x_w)
+        append_name ("_x");
+      append_name (get_pred_str (instance.get_pred (), true));
+      return finish_name ();
+    }
+  return nullptr;
+}
+
+tree
+basic_alu::get_return_type (const function_instance &instance) const
+{
+  return get_dt_t_with_index (instance, 0);
+}
+
+/* A function implementation for unary functions.  */
+void
+unop::get_argument_types (const function_instance &instance,
+                          vec<tree> &argument_types) const
+{
+  argument_types.quick_push (get_dt_t_with_index (instance, 1));
+}
+
+bool
+unop::can_be_overloaded_p (const function_instance &instance) const
+{
+  if (instance.get_pred () == PRED_none)
+    return false;
+
+  return true;
+}
+
+/* A function implementation for binary functions.  */
+void
+binop::get_argument_types (const function_instance &instance,
+                           vec<tree> &argument_types) const
+{
+  for (unsigned int i = 1; i < instance.get_arg_pattern ().arg_len; i++)
+    {
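+      /* For vector-scalar (_vx) operations the scalar source is
+         passed as the element type rather than a vector type.  */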
+      if (i == 2 && vector_scalar_operation_p (instance.get_operation ()))
+        {
+          machine_mode mode = GET_MODE_INNER (instance.get_arg_pattern ().arg_list[i]);
+          bool unsigned_p = is_dt_unsigned (instance.get_data_type_list ()[i]);
+          argument_types.quick_push (get_dt_t (mode, unsigned_p));
+        }
+      else
+        argument_types.quick_push (get_dt_t_with_index (instance, i));
+    }
+}
+
+bool
+binop::can_be_overloaded_p (const function_instance &) const
+{
+  return true;
+}
+
+/* A function implementation for widen binary functions.  */
+char *
+wbinop::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0);
+  append_name (instance.get_base_name ());
+  append_name (get_operation_str (instance.get_operation ()));
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+/* A function implementation for ternary functions.  */
+void
+ternop::get_argument_types (const function_instance &instance,
+                            vec<tree> &argument_types) const
+{
+  if (vector_scalar_operation_p (instance.get_operation ()))
+    {
+      machine_mode mode = GET_MODE_INNER (instance.get_arg_pattern ().arg_list[1]);
+      bool unsigned_p = is_dt_unsigned (instance.get_data_type_list ()[1]);
+      argument_types.quick_push (get_dt_t (mode, unsigned_p));
+    }
+  else
+    argument_types.quick_push (get_dt_t_with_index (instance, 1));
+  for (unsigned int i = 2; i < instance.get_arg_pattern ().arg_len; i++)
+    argument_types.quick_push (get_dt_t_with_index (instance, i));
+}
+
+bool
+ternop::can_be_overloaded_p (const function_instance &) const
+{
+  return true;
+}
+
+/* A function implementation for vadd functions.  */
+rtx
+vadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vadd_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+rtx
+vsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vsub_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vrsub functions.  */
+rtx
+vrsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_v_vx (UNSPEC_VRSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vneg functions.  */
+rtx
+vneg::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vneg_v (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwadd functions.  */
+rtx
+vwadd::expand (const function_instance &instance, tree exp,
+                     rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vw_vv (PLUS, SIGN_EXTEND, mode);
+  else if (instance.get_operation () == OP_vx)
+    icode = code_for_vw_vx (PLUS, SIGN_EXTEND, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vw_wv (PLUS, SIGN_EXTEND, mode);
+  else
+    icode = code_for_vw_wx (PLUS, SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwsub functions.  */
+rtx
+vwsub::expand (const function_instance &instance, tree exp,
+                     rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vw_vv (MINUS, SIGN_EXTEND, mode);
+  else if (instance.get_operation () == OP_vx)
+    icode = code_for_vw_vx (MINUS, SIGN_EXTEND, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vw_wv (MINUS, SIGN_EXTEND, mode);
+  else
+    icode = code_for_vw_wx (MINUS, SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwaddu functions.  */
+rtx
+vwaddu::expand (const function_instance &instance, tree exp,
+                     rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vw_vv (PLUS, ZERO_EXTEND, mode);
+  else if (instance.get_operation () == OP_vx)
+    icode = code_for_vw_vx (PLUS, ZERO_EXTEND, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vw_wv (PLUS, ZERO_EXTEND, mode);
+  else
+    icode = code_for_vw_wx (PLUS, ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwsubu functions.  */
+rtx
+vwsubu::expand (const function_instance &instance, tree exp,
+                     rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vw_vv (MINUS, ZERO_EXTEND, mode);
+  else if (instance.get_operation () == OP_vx)
+    icode = code_for_vw_vx (MINUS, ZERO_EXTEND, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vw_wv (MINUS, ZERO_EXTEND, mode);
+  else
+    icode = code_for_vw_wx (MINUS, ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwcvt functions.  */
+rtx
+vwcvt::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode = code_for_vwcvt_x_x_v (SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwcvtu functions.  */
+rtx
+vwcvtu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode = code_for_vwcvt_x_x_v (ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsext functions.  */
+char *
+vsext::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0);
+  append_name (instance.get_base_name ());
+  append_name (get_operation_str (instance.get_operation ()));
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+rtx
+vsext::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vf2)
+    icode = code_for_vext_vf2 (SIGN_EXTEND, mode);
+  else if (instance.get_operation () == OP_vf4)
+    icode = code_for_vext_vf4 (SIGN_EXTEND, mode);
+  else
+    icode = code_for_vext_vf8 (SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vzext functions.  */
+char *
+vzext::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0);
+  append_name (instance.get_base_name ());
+  append_name (get_operation_str (instance.get_operation ()));
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+rtx
+vzext::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vf2)
+    icode = code_for_vext_vf2 (ZERO_EXTEND, mode);
+  else if (instance.get_operation () == OP_vf4)
+    icode = code_for_vext_vf4 (ZERO_EXTEND, mode);
+  else
+    icode = code_for_vext_vf8 (ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vadc functions.  */
+rtx
+vadc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vvm)
+    icode = code_for_vadc_vvm (mode);
+  else
+    icode = code_for_v_vxm (UNSPEC_VADC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsbc functions.  */
+rtx
+vsbc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vvm)
+    icode = code_for_vsbc_vvm (mode);
+  else
+    icode = code_for_v_vxm (UNSPEC_VSBC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmadc functions.  */
+char *
+vmadc::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  return finish_name ();
+}
+
+rtx
+vmadc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vvm)
+    icode = code_for_vmadc_vvm (mode);
+  else if (instance.get_operation () == OP_vv)
+    icode = code_for_vmadc_vv (mode);
+  else if (instance.get_operation () == OP_vxm)
+    icode = code_for_v_vxm (UNSPEC_VMADC, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMADC, mode);
+
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsbc functions.  */
+char *
+vmsbc::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  return finish_name ();
+}
+
+rtx
+vmsbc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vvm)
+    icode = code_for_vmsbc_vvm (mode);
+  else if (instance.get_operation () == OP_vv)
+    icode = code_for_vmsbc_vv (mode);
+  else if (instance.get_operation () == OP_vxm)
+    icode = code_for_v_vxm (UNSPEC_VMSBC, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMSBC, mode);
+
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vand functions.  */
+rtx
+vand::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (AND, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VAND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vor functions.  */
+rtx
+vor::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (IOR, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VOR, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vxor functions.  */
+rtx
+vxor::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (XOR, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VXOR, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnot functions.  */
+rtx
+vnot::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+
+  enum insn_code icode = code_for_vnot_v (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vshift functions.  */
+void
+vshift::get_argument_types (const function_instance &instance,
+                            vec<tree> &argument_types) const
+{
+  argument_types.quick_push (get_dt_t_with_index (instance, 1));
+  if (instance.get_operation () == OP_vx || instance.get_operation () == OP_wx)
+    argument_types.quick_push (size_type_node);
+  else
+    argument_types.quick_push (get_dt_t_with_index (instance, 2));
+}
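+
+/* For illustration (names per rvv-intrinsic-doc): the scalar shift
+   amount in the _vx/_wx forms is a size_t, e.g.
+   vsll_vx_i32m1 (op1, 3, vl), while the _vv forms take an unsigned
+   vector of per-element shift amounts, matching the
+   VATTR(0, VI, unsigned) operand in the .def entries below.  */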
+
+/* A function implementation for vsll functions.  */
+rtx
+vsll::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (ASHIFT, mode);
+  else
+    icode = code_for_v_vx (ASHIFT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsrl functions.  */
+rtx
+vsrl::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (LSHIFTRT, mode);
+  else
+    icode = code_for_v_vx (LSHIFTRT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsra functions.  */
+rtx
+vsra::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (ASHIFTRT, mode);
+  else
+    icode = code_for_v_vx (ASHIFTRT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnsrl functions.  */
+rtx
+vnsrl::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_wv)
+    icode = code_for_vn_wv (LSHIFTRT, mode);
+  else
+    icode = code_for_vn_wx (LSHIFTRT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnsra functions.  */
+rtx
+vnsra::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[0];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_wv)
+    icode = code_for_vn_wv (ASHIFTRT, mode);
+  else
+    icode = code_for_vn_wx (ASHIFTRT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vncvt functions.  */
+rtx
+vncvt::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vncvt_x_x_w (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
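+
+/* Note: vncvt is the narrowing integer conversion (vncvt.x.x.w); it
+   truncates each 2*SEW source element to SEW, which is why the mode
+   is taken from the result type here and the .def entries below use
+   OP_x_x_w.  */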
+
+/* A function implementation for vcmp functions.  */
+char *
+vcmp::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+/* A function implementation for vmseq functions.  */
+rtx
+vmseq::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (EQ, mode);
+  else
+    icode = code_for_vms_vx (EQ, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsne functions.  */
+rtx
+vmsne::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (NE, mode);
+  else
+    icode = code_for_vms_vx (NE, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmslt functions.  */
+rtx
+vmslt::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (LT, mode);
+  else
+    icode = code_for_vms_vx (LT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsltu functions.  */
+rtx
+vmsltu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (LTU, mode);
+  else
+    icode = code_for_vms_vx (LTU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsgt functions.  */
+rtx
+vmsgt::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (GT, mode);
+  else
+    icode = code_for_vms_vx (GT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsgtu functions.  */
+rtx
+vmsgtu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (GTU, mode);
+  else
+    icode = code_for_vms_vx (GTU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsle functions.  */
+rtx
+vmsle::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (LE, mode);
+  else
+    icode = code_for_vms_vx (LE, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsleu functions.  */
+rtx
+vmsleu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (LEU, mode);
+  else
+    icode = code_for_vms_vx (LEU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsge functions.  */
+rtx
+vmsge::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (GE, mode);
+  else
+    icode = code_for_vms_vx (GE, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmsgeu functions.  */
+rtx
+vmsgeu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vms_vv (GEU, mode);
+  else
+    icode = code_for_vms_vx (GEU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
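+
+/* Note: the base V extension has no vmsge.vx/vmsgeu.vx encoding;
+   vmsge.vx is an assembler pseudoinstruction, so the pattern behind
+   code_for_v_vx (GE/GEU) presumably synthesizes it, e.g. from
+   vmslt{u}.vx plus a mask inversion.  */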
+
+/* A function implementation for vmin functions.  */
+rtx
+vmin::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (SMIN, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMIN, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vminu functions.  */
+rtx
+vminu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UMIN, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMINU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmax functions.  */
+rtx
+vmax::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (SMAX, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMAX, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmaxu functions.  */
+rtx
+vmaxu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UMAX, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMAXU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmul functions.  */
+rtx
+vmul::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vmul_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMUL, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmulh functions.  */
+rtx
+vmulh::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vmulh_vv (UNSPEC_VMULH, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMULH, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmulhu functions.  */
+rtx
+vmulhu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vmulh_vv (UNSPEC_VMULHU, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMULHU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmulhsu functions.  */
+rtx
+vmulhsu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vmulhsu_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VMULHSU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
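+
+/* Note: per the V specification, vmulhsu multiplies a signed operand
+   by an unsigned operand and returns the high SEW bits of the 2*SEW
+   product; vmulh and vmulhu above are its signed and unsigned
+   counterparts.  */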
+
+/* A function implementation for vdiv functions.  */
+rtx
+vdiv::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (DIV, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VDIV, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vdivu functions.  */
+rtx
+vdivu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UDIV, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VDIVU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vrem functions.  */
+rtx
+vrem::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (MOD, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VREM, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vremu functions.  */
+rtx
+vremu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UMOD, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VREMU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmul functions.  */
+rtx
+vwmul::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmul_vv (SIGN_EXTEND, mode);
+  else
+    icode = code_for_vwmul_vx (SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmulu functions.  */
+rtx
+vwmulu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmul_vv (ZERO_EXTEND, mode);
+  else
+    icode = code_for_vwmul_vx (ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmulsu functions.  */
+rtx
+vwmulsu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmulsu_vv (mode);
+  else
+    icode = code_for_vwmulsu_vx (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
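+
+/* Note: vwmulsu widens as it multiplies, producing a 2*SEW signed
+   result from a signed and an unsigned SEW operand, hence the mode is
+   taken from the narrow source operand in arg_list[1].  */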
+
+/* A function implementation for vmacc functions.  */
+rtx
+vmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_MACC, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_MACC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnmsac functions.  */
+rtx
+vnmsac::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_NMSAC, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_NMSAC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmadd functions.  */
+rtx
+vmadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_MADD, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_MADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnmsub functions.  */
+rtx
+vnmsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_NMSUB, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_NMSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmacc functions.  */
+rtx
+vwmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmacc_vv (SIGN_EXTEND, mode);
+  else
+    icode = code_for_vwmacc_vx (SIGN_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmaccu functions.  */
+rtx
+vwmaccu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmacc_vv (ZERO_EXTEND, mode);
+  else
+    icode = code_for_vwmacc_vx (ZERO_EXTEND, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmaccsu functions.  */
+rtx
+vwmaccsu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vwmaccsu_vv (mode);
+  else
+    icode = code_for_vwmaccsu_vx (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vwmaccus functions.  */
+rtx
+vwmaccus::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode = code_for_vwmaccus_vx (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
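+
+/* Note: unlike the other widening multiply-adds, vwmaccus exists only
+   in a .vx form in the V specification (unsigned scalar times signed
+   vector), so no OP_vv dispatch is needed here.  */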
+
+/* A function implementation for vmerge functions.  */
+size_t
+vmerge::get_position_of_dest_arg (enum predication_index) const
+{
+  return 1;
+}
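+
+/* The merged-into destination sits at argument index 1 here,
+   presumably because the mask occupies slot 0 in the vmerge argument
+   list (mask, op1, op2, vl).  */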
+
+rtx
+vmerge::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vvm)
+    icode = code_for_vmerge_vvm (mode);
+  else
+    icode = code_for_v_vxm (UNSPEC_VMERGE, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vmv functions.  */
+void
+vmv::get_argument_types (const function_instance &instance,
+                         vec<tree> &argument_types) const
+{
+  if (instance.get_operation () == OP_v_x)
+    argument_types.quick_push (get_dt_t_with_index (instance, 1));
+  else
+    argument_types.quick_push (get_dt_t_with_index (instance, 0));
+}
+
+bool
+vmv::can_be_overloaded_p (const function_instance &instance) const
+{
+  if (instance.get_operation () == OP_v_v)
+    return true;
+
+  if (instance.get_pred () == PRED_tu)
+    return true;
+
+  return false;
+}
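+
+/* Note: vmv.v.v can always be resolved from its vector argument, but
+   vmv.v.x takes only a scalar, so (judging from the logic above) it
+   is overloadable only when a tail-undisturbed destination operand
+   supplies the vector type.  */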
+
+rtx
+vmv::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_v_x)
+    icode = code_for_v_v_x (UNSPEC_VMV, mode);
+  else
+    icode = code_for_vmv_v_v (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
 } // end namespace riscv_vector
 
 using namespace riscv_vector;
diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def
index 6d82b1c933d..bf9d42e6d67 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.def
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.def
@@ -139,6 +139,104 @@ DEF_RVV_FUNCTION(vsoxei, vsoxei, (4, VITER(V128UNITSI, unsigned), VATTR(0, VSUB,
 DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VI, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
 DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VI, unsigned), VATTR(0, VSUB, c_uptr)), pat_mask_tail, pred_all, OP_v)
 DEF_RVV_FUNCTION(vleff, vleff, (2, VITER(VF, signed), VATTR(0, VSUB, c_ptr)), pat_mask_tail, pred_all, OP_v)
+/* 11. Vector Integer Arithmetic Instructions.  */
+DEF_RVV_FUNCTION(vadd, vadd, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vadd, vadd, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vsub, vsub, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vsub, vsub, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vrsub, vrsub, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vx)
+DEF_RVV_FUNCTION(vrsub, vrsub, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vx)
+DEF_RVV_FUNCTION(vneg, vneg, (2, VITER(VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vwadd, vwadd, (3, VATTR(1, VW, signed), VITER(VWI, signed), VATTR(1, VWI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vwsub, vwsub, (3, VATTR(1, VW, signed), VITER(VWI, signed), VATTR(1, VWI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vwaddu, vwaddu, (3, VATTR(1, VW, unsigned), VITER(VWI, unsigned), VATTR(1, VWI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vwsubu, vwsubu, (3, VATTR(1, VW, unsigned), VITER(VWI, unsigned), VATTR(1, VWI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vwadd, vwadd, (3, VATTR(2, VW, signed), VATTR(2, VW, signed), VITER(VWI, signed)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vwsub, vwsub, (3, VATTR(2, VW, signed), VATTR(2, VW, signed), VITER(VWI, signed)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vwaddu, vwaddu, (3, VATTR(2, VW, unsigned), VATTR(2, VW, unsigned), VITER(VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vwsubu, vwsubu, (3, VATTR(2, VW, unsigned), VATTR(2, VW, unsigned), VITER(VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vwcvt, vwcvt, (2, VATTR(1, VW, signed), VITER(VWI, signed)), pat_mask_tail, pred_all, OP_x_x_v)
+DEF_RVV_FUNCTION(vwcvtu, vwcvtu, (2, VATTR(1, VW, unsigned), VITER(VWI, unsigned)), pat_mask_tail, pred_all, OP_x_x_v)
+DEF_RVV_FUNCTION(vsext, vsext, (2, VATTR(1, VW, signed), VITER(VWI, signed)), pat_mask_tail, pred_all, OP_vf2)
+DEF_RVV_FUNCTION(vsext, vsext, (2, VATTR(1, VQW, signed), VITER(VQWI, signed)), pat_mask_tail, pred_all, OP_vf4)
+DEF_RVV_FUNCTION(vsext, vsext, (2, VATTR(1, VOW, signed), VITER(VOWI, signed)), pat_mask_tail, pred_all, OP_vf8)
+DEF_RVV_FUNCTION(vzext, vzext, (2, VATTR(1, VW, unsigned), VITER(VWI, unsigned)), pat_mask_tail, pred_all, OP_vf2)
+DEF_RVV_FUNCTION(vzext, vzext, (2, VATTR(1, VQW, unsigned), VITER(VQWI, unsigned)), pat_mask_tail, pred_all, OP_vf4)
+DEF_RVV_FUNCTION(vzext, vzext, (2, VATTR(1, VOW, unsigned), VITER(VOWI, unsigned)), pat_mask_tail, pred_all, OP_vf8)
+DEF_RVV_FUNCTION(vadc, vadc, (4, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed), VATTR(0, VM, signed)), PAT_tail, pred_tail, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vsbc, vsbc, (4, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed), VATTR(0, VM, signed)), PAT_tail, pred_tail, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vadc, vadc, (4, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VM, unsigned)), PAT_tail, pred_tail, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vsbc, vsbc, (4, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VM, unsigned)), PAT_tail, pred_tail, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vmadc, vmadc, (4, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed), VATTR(1, VM, signed)), PAT_none, PRED_void, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vmadc, vmadc, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), PAT_none, PRED_void, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsbc, vmsbc, (4, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed), VATTR(1, VM, signed)), PAT_none, PRED_void, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vmsbc, vmsbc, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), PAT_none, PRED_void, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmadc, vmadc, (4, VATTR(1, VM, signed), VITER(VI, unsigned), VATTR(1, VI, unsigned), VATTR(1, VM, signed)), PAT_none, PRED_void, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vmadc, vmadc, (3, VATTR(1, VM, signed), VITER(VI, unsigned), VATTR(1, VI, unsigned)), PAT_none, PRED_void, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsbc, vmsbc, (4, VATTR(1, VM, signed), VITER(VI, unsigned), VATTR(1, VI, unsigned), VATTR(1, VM, signed)), PAT_none, PRED_void, OP_vvm | OP_vxm)
+DEF_RVV_FUNCTION(vmsbc, vmsbc, (3, VATTR(1, VM, signed), VITER(VI, unsigned), VATTR(1, VI, unsigned)), PAT_none, PRED_void, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vand, vand, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vor, vor, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vxor, vxor, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vand, vand, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vor, vor, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vxor, vxor, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vnot, vnot, (2, VITER(VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vnot, vnot, (2, VITER(VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_v)
+DEF_RVV_FUNCTION(vsll, vsll, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vsll, vsll, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vsrl, vsrl, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vsra, vsra, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vnsrl, vnsrl, (3, VITER(VWI, unsigned), VATTR(0, VW, unsigned), VATTR(0, VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vnsra, vnsra, (3, VITER(VWI, signed), VATTR(0, VW, signed), VATTR(0, VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx)
+DEF_RVV_FUNCTION(vncvt, vncvt, (2, VITER(VWI, signed), VATTR(0, VW, signed)), pat_mask_tail, pred_all, OP_x_x_w)
+DEF_RVV_FUNCTION(vncvt, vncvt, (2, VITER(VWI, unsigned), VATTR(0, VW, unsigned)), pat_mask_tail, pred_all, OP_x_x_w)
+DEF_RVV_FUNCTION(vmseq, vmseq, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmseq, vmseq, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsne, vmsne, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsne, vmsne, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmslt, vmslt, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsltu, vmsltu, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsgt, vmsgt, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsgtu, vmsgtu, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsle, vmsle, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsleu, vmsleu, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsge, vmsge, (3, VATTR(1, VM, signed), VITER(VI, signed), VATTR(1, VI, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmsgeu, vmsgeu, (3, VATTR(1, VM, unsigned), VITER(VI, unsigned), VATTR(1, VI, unsigned)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vx)
+DEF_RVV_FUNCTION(vmin, vmin, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx)
+DE